
Perhaps, roughly, I might be described as an advanced undergraduate in mathematics. However, I have not learned statistics and have only learned elementary probability. Does there exist a book or monograph that introduces probability and statistics at this level and still covers the frequentist and Bayesian views (philosophy?) in a balanced manner?

It appears to me (but please correct me if I am wrong; as I have said, I haven't learned this yet) that introductions at this level usually adopt a fully frequentist view and don't really broach the subject. On the other hand, Bayesian books appear to be pitched at a more experienced audience and/or are perhaps even more unbalanced, in the sense that they seem as much anti-frequentist as pro-Bayesian.

To describe my mathematical maturity more fully: I am comfortable with the standard calculus sequence (although somewhat rusty), basic linear algebra, basic set theory, mathematical logic, computability theory, some abstract algebra, and very little category theory. I am comfortable at the level of introductory analysis, though I have not completed it, and I am not versed in measure theory (I expect I could handle measure theory, but knowledge of it shouldn't be assumed). I am often interested in foundational topics and a philosophical viewpoint; for example, I particularly enjoy reading Peter Smith (e.g. An Introduction to Gödel's Theorems).

  • Chapter 1 of Leon-Garcia uses appropriate brevity to discuss "relative frequency" in relation to the "law of large numbers." However, your question suggests there are two different ways of teaching probability, which is not quite true. I believe most (perhaps all?) of the "Bayesian versus frequentist" debate concerns advanced problems of statistical estimation (involving various ways to model the thing to be estimated). Everyone in the debate relies on a common basic theory of probability. For example, Leon-Garcia has probability theory in Chapters 1–7 and statistics in Chapter 8. (Commented May 4, 2022 at 21:46)

2 Answers


I would recommend the following 16-page article from 1978, which is easily accessible to any upper-level undergraduate:

"Controversies in the Foundations of Statistics", Bradley Efron, The American Mathematical Monthly, vol. 85, 1978, pp. 231-246.

This won an MAA Writing Award.

In my inexpert opinion, it is quite unsettling. It does not offer a resolution.


Edit: The above-mentioned article impressed me when I was a student. But there has been much follow-up. For example:

"A Two-Hundred-and-Fifty-Year Argument", Bradley Efron, LASR 2011 — Next Generation Statistics in Biosciences (Proceedings, 30th Leeds Annual Statistical Research Workshop), 2011

"Bayes Theorem in the Twenty First Century", Bradley Efron, Science 340, June 7, 2013



Further edit: Oops, my ignorance is showing.

B. Efron was president of the ASA in 2004; see his presidential address "Bayesians, Frequentists, and Scientists" at https://efron.ckirby.su.domains/papers/2005BayesFreqSci.pdf.

He won the 2014 Guy Medal in Gold.

See also his CV (via the Wayback Machine) and this previous Math.SE answer about his work on bootstrapping, etc.

  • Thank you. I have skimmed the articles and they look interesting; I'll read them more carefully when I have the chance. However, what I'm really looking for is a book-level introduction to probability and statistics that treats the frequentist and Bayesian viewpoints (and perhaps the Fisherian? I hadn't heard of that one until I looked at the first article) in a balanced manner. Note that it doesn't need to be completely focused on this, just not completely ignore it and/or assume one or the other viewpoint. Perhaps I'm barking up the wrong tree here. (Commented Jun 18, 2013 at 13:33)
  • Edit: Two broken links. Efron's ASA presidential address "Bayesians, Frequentists, and Scientists" can be found at efron.ckirby.su.domains/papers/2005BayesFreqSci.pdf, or via its DOI. His CV can be found with the Wayback Machine. And Efron's home page seems to be efron.ckirby.su.domains. (Commented Apr 2, 2022 at 14:48)
  • I know you've been around to do some review-queue tasks, so I hesitated to make your suggested fixes myself. But I went for it. Feel free to roll back or amend my edits to your Answer. (Commented May 4, 2022 at 20:47)

Too long for a comment.

I am an applied statistician, and I don't distinguish much between the two. Bayes' theorem is non-controversial; it is a theorem, after all. The problem arises when people use Bayes' theorem. Here we have an observed event (the "data") and a hypothetical event (the "hypothesis"); call these $D$ and $H.$ By Bayes' theorem $$ P(H \mid D) = \dfrac{P(D \mid H) P(H)}{P(D)}, $$ and the left-hand side is what is of interest: how likely the hypothesis is, having observed the data. On the right-hand side, the likelihood of the data given the hypothesis is usually well known, as is $P(D).$ The foundational problem of the Bayesian approach is that $P(H)$ is chosen solely at the discretion of the researcher, who can then choose it to make the results "statistically significant." Other than this, there are no controversies.
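To make the prior-sensitivity point concrete, here is a minimal numerical sketch in Python. All the probabilities below are hypothetical values of my own choosing, used only to show how the same data yields very different posteriors depending on the prior the researcher picks:

```python
# A toy calculation of the posterior P(H|D) for several priors P(H).
# All numbers here are hypothetical, chosen only for illustration.

def posterior(p_d_given_h, p_d_given_not_h, p_h):
    """P(H|D) = P(D|H) P(H) / P(D), with P(D) expanded by total probability."""
    p_d = p_d_given_h * p_h + p_d_given_not_h * (1.0 - p_h)
    return p_d_given_h * p_h / p_d

P_D_GIVEN_H = 0.8      # P(D|H): the data is likely if the hypothesis holds
P_D_GIVEN_NOT_H = 0.3  # P(D|not H): the data is less likely otherwise

for p_h in (0.01, 0.1, 0.5, 0.9):
    post = posterior(P_D_GIVEN_H, P_D_GIVEN_NOT_H, p_h)
    print(f"prior P(H) = {p_h:4.2f}  ->  posterior P(H|D) = {post:.3f}")
```

With these (made-up) likelihoods the posterior ranges from about $0.03$ to $0.96$ purely as a function of the chosen prior, which is exactly the discretion described above.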

The usage of Bayes' theorem in many "Bayesian algorithms" is again non-controversial, and many "Bayesian approaches" should really be called "probabilistic approaches," which, by definition, are frequentist; I expand on this point now. Using measure-theoretic tools for the interpretation of probability really means we are using the frequentist approach (e.g. consider the law of large numbers). Some people want to consider a "true" Bayesian approach in which probabilities are not long-run frequencies but rather subjective assessments of the likelihood of events. These people seem, in my estimation, to be blind. If they are mathematically oriented and indeed use the measure-theoretic approach to probability then, as stated earlier, they are using the frequency approach to probability. If, in contrast, they are using something else, then they are not doing statistics based on probability theory, and we may as well say they are doing mumbo jumbo.
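As a small illustration of the law-of-large-numbers point, here is a minimal simulation sketch; the event and the seed are arbitrary choices of mine:

```python
import random

# Law of large numbers in action: the relative frequency of an event
# approaches its probability. Event: a fair die shows 6, so P = 1/6.
random.seed(0)  # fixed seed for reproducibility

for n in (10, 100, 1_000, 10_000, 100_000):
    hits = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
    print(f"n = {n:>6}: relative frequency = {hits / n:.4f}  (P = {1/6:.4f})")
```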

There is another very silly (to say the least) argument about parameters. Sometimes people bring up the point that Bayesians think of parameters as random quantities, and so on. This is a superficial remark; it is really nothing. In both views you choose a probability model for your experiment, and nothing says that one should be preferred over the other. The frequentist may choose (rather subjectively) that $n$ throws of a coin obey the binomial distribution with some unknown deterministic parameter $p;$ in symbols, the frequentist rather arbitrarily imposes the model $$ X \sim \mathsf{Bin}(n, p), $$ with $n$ known and $p$ to be estimated. The Bayesian may choose a hierarchical model in which the parameter $p$ follows a Beta distribution and, once that is fixed, the coin throws are binomial with parameter $p;$ in symbols, $$ p \sim \mathsf{Beta}(\alpha, \beta), \\ X \mid p \sim \mathsf{Bin}(n, p). $$ The preference for either of these two approaches is rather subjective, and both can be assessed empirically as to which one provides better accuracy or predictive power for the task at hand, so this debate, in my view, is a purely juvenile tirade.
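To see how the two models play out on the same data, here is a minimal sketch with hypothetical data and prior parameters: the frequentist maximum-likelihood estimate is $\hat p = x/n$, and since the $\mathsf{Beta}(\alpha, \beta)$ prior is conjugate to the binomial likelihood, the Bayesian posterior is $\mathsf{Beta}(\alpha + x, \beta + n - x)$ with mean $(\alpha + x)/(\alpha + \beta + n)$:

```python
# Two estimates of the coin's p from the same data: the frequentist
# MLE x/n, and the Bayesian posterior mean under a Beta(alpha, beta)
# prior. Conjugacy gives posterior Beta(alpha + x, beta + n - x),
# whose mean is (alpha + x) / (alpha + beta + n).

def mle(x, n):
    return x / n

def posterior_mean(x, n, alpha, beta):
    return (alpha + x) / (alpha + beta + n)

x, n = 7, 10  # hypothetical data: 7 heads in 10 throws

print(f"frequentist MLE:           {mle(x, n):.3f}")
for alpha, beta in [(1, 1), (10, 10), (2, 8)]:
    pm = posterior_mean(x, n, alpha, beta)
    print(f"posterior mean, Beta({alpha},{beta}): {pm:.3f}")
```

As $n$ grows, the posterior mean approaches the MLE regardless of $(\alpha, \beta)$, which fits the point above that the choice between the two models is largely a matter of taste to be judged by predictive performance on the task at hand.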

