Too long for a comment.
I am an applied statistician, and I don't distinguish much between the two. Bayes theorem is non-controversial; it is a theorem, after all. The problem arises when people use Bayes theorem. Here we have an observed event (the "data") and a hypothetical event (the "hypothesis"); call these $D$ and $H.$ By Bayes theorem $$ P(H \mid D) = \dfrac{P(D \mid H) P(H)}{P(D)}, $$ and the left-hand side is the quantity of interest: how likely the hypothesis is, having observed the data. On the right-hand side, the likelihood of the data given the hypothesis is usually well known, and $P(D)$ is also well known. The foundational problem of the Bayesian approach is that $P(H)$ is chosen solely at the discretion of the researcher, who can then choose it to make the results "statistically significant." Other than this, there are no controversies.
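To make the point about the prior concrete, here is a minimal sketch (my own illustration, not part of the original argument): for a binary hypothesis, the posterior $P(H \mid D)$ can swing widely depending on the prior $P(H)$ the researcher picks, even with the likelihoods held fixed. The numbers are arbitrary and only for illustration.

```python
def posterior(p_d_given_h, p_d_given_not_h, p_h):
    """Bayes theorem for a binary hypothesis: returns P(H | D)."""
    # Total probability: P(D) = P(D|H)P(H) + P(D|not H)P(not H)
    p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)
    return p_d_given_h * p_h / p_d

# Same data (likelihoods fixed), different priors chosen by the researcher.
for p_h in (0.01, 0.5, 0.9):
    print(p_h, posterior(p_d_given_h=0.8, p_d_given_not_h=0.3, p_h=p_h))
```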
The usage of Bayes theorem in many "Bayesian algorithms" is again non-controversial, and many "Bayesian approaches" should really be called "probabilistic approaches," which, by definition, are frequentist, as I now expand on. Using measure-theoretic tools for probability really means we are using the frequentist interpretation (e.g., consider the law of large numbers). Some people want to consider a "true" Bayesian approach in which probabilities are not long-run frequencies but rather subjective assessments of the likelihood of events. In my estimation, these people are blind to the following point. If they are mathematically oriented and indeed use the measure-theoretic approach to probability then, as stated earlier, they are using the frequency interpretation of probability. If, in contrast, they are using something else, then they are not doing statistics based on probability theory, and we may as well say they are doing mumbo jumbo.
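The law of large numbers is exactly the kind of measure-theoretic statement I have in mind; a tiny simulation (my own sketch, with an assumed probability of 0.3) shows what it says in practice: the empirical frequency of an event converges to its probability, which is the frequentist reading.

```python
import random

random.seed(0)
p = 0.3                       # assumed "true" probability of the event
n = 100_000
hits = sum(random.random() < p for _ in range(n))
print(hits / n)               # close to 0.3 for large n, per the law of large numbers
```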
There is another very silly (to say the least) argument about parameters. Sometimes people bring up the fact that Bayesians think of parameters as random quantities, and so on. This is a superficial remark; it is really nothing. In both views you choose a probability model for your experiment, and there is nothing that says one should be preferred over the other. The frequentist may choose (rather subjectively) that $n$ throws of a coin obey the binomial distribution with some unknown deterministic parameter $p;$ in symbols, the frequentist rather arbitrarily imposes the model $$ X \sim \mathsf{Bin}(n, p), $$ with $n$ known and $p$ to be estimated. The Bayesian may choose a hierarchical model in which the parameter $p$ follows a Beta distribution and, once that is fixed, we throw the coin as a binomial with parameter $p;$ in symbols, $$ p \sim \mathsf{Beta}(\alpha, \beta), \\ X \mid p \sim \mathsf{Bin}(n, p). $$ The preference for either of these two approaches is rather subjective, and both can be assessed empirically as to which one provides better accuracy or predictive power for the task at hand, so this debate in my view is a purely juvenile tirade.
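Here is a hedged sketch of the two coin models side by side (again my own illustration; the data and the hyperparameters $\alpha, \beta$ are assumptions picked only for the example): the frequentist maximum likelihood estimate of $p$ versus the posterior mean from the conjugate Beta-Binomial model. Either estimate can then be checked empirically, e.g. on held-out throws.

```python
n, x = 20, 14                 # n throws of the coin, x heads observed (made-up data)

# Frequentist: maximum likelihood estimate under X ~ Bin(n, p)
p_mle = x / n

# Bayesian: p ~ Beta(alpha, beta), X | p ~ Bin(n, p), so by conjugacy
# the posterior is p | X = x ~ Beta(alpha + x, beta + n - x)
alpha, beta = 2, 2            # assumed hyperparameters, for illustration only
p_post_mean = (alpha + x) / (alpha + beta + n)

print(p_mle, p_post_mean)     # compare the two estimates of p
```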