2

Suppose we have 2 competing theories X and Y.

Now suppose theory X is confirmed (per Bayesian Confirmation Theory) and Y is disconfirmed. When are we justified in saying "we know X is true"?

Is there a certain threshold? Or does mere confirmation lead to knowledge?

16
  • 3
    That's when P(X) = 1 Commented Aug 25 at 1:08
  • 3
    In science we NEVER know that X is true. In philosophy anything might happen :-). Commented Aug 25 at 3:28
  • 3
    Generally, in Bayesian paradigms, we are working on probabilities in (0,1), not inclusive. If your prior or posterior is 0, it's likely that this info is baked in a priori (i.e. the 0 bits aren't "known" via Bayesian reasoning, they are "known" via some meta process, logic, etc). So, if by "knowledge" you mean "certainty", then Bayesianism doesn't provide you with any. If by "knowledge" you mean "pretty darned sure", then Bayesian frameworks can help with that. Commented Aug 25 at 15:10
  • 1
    @Carla_, not to my knowledge. P(X) = 1 means X is certain Commented Aug 26 at 0:10
  • 2
    Note that by Shannon, Bayesian inference does not generate "knowledge". The number of "degrees of freedom" is not increased. You don't have new information. Commonly: calculating the mean of a set of measurements does not generate "knowledge". Commented Aug 26 at 10:39

5 Answers

5

If we are in a position to use loose, everyday talk, then we could say we know (theory X is true and theory Y is false) if we have sufficient evidence for theory X ("more" than for theory Y) and if the two theories somehow are incompatible. Bayesian modeling is a simplified, specific, non-exclusive way to formalize and quantify what it means to have "more" evidence (enough to go with one hypothesis or theory rather than another).

One big problem is that it depends on fixing a "prior": the initial probability of the hypothesis (or of a complete theory). This is never trivial, and Bayesian theory itself provides no justification for any prior. Another big issue is that it has no good or accepted way to revise an initial prior later (after performing several updates on incoming data). Once you have selected one, you're locked in. So it's very dubious whether Bayesian modeling accurately represents the process by which we (or other sentient creatures) rationally update our beliefs, in everyday life or in science.
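The "locked in" point can be made concrete with a minimal sketch of Bayesian updating (the numbers are my own illustrative assumptions, not from the answer): the posterior depends entirely on the chosen prior, and a dogmatic prior of exactly 0 can never be revised by any later evidence.

```python
def update(prior_h, lik_e_given_h, lik_e_given_not_h):
    """One step of Bayes' rule: P(H|E) = P(H)P(E|H) / P(E)."""
    p_e = prior_h * lik_e_given_h + (1 - prior_h) * lik_e_given_not_h
    return prior_h * lik_e_given_h / p_e

# Strong, repeated evidence for H: each E is 10x more likely under H than ~H.
p = 0.5
for _ in range(10):
    p = update(p, 0.9, 0.09)
print(p)   # climbs toward 1, but stays below it

# A prior of exactly 0 is locked in: no amount of evidence moves it.
p = 0.0
for _ in range(10):
    p = update(p, 0.9, 0.09)
print(p)   # → 0.0
```

The same lock-in holds symmetrically for a prior of exactly 1, which is why Bayesian reasoning is usually confined to priors strictly between 0 and 1.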

Given these kinds of intrinsic problems, it seems best to use Bayesian theory selection only as a heuristic tool: it may help in selecting a hypothesis (or theory) that seems more promising for the time being, but we should be prepared to overhaul the complete theory if later evidence doesn't fit (where "not fitting" may not be fully captured by the Bayesian model). Any "knowledge" it gives may only be temporary, and any underlying, deeper explanation of phenomena needs to be sought in explaining why those priors would be what we assume them to be. (In physics, for instance, this would require deeper or more general causal theories.)


For an explanation of how threshold values can be calculated in a Bayesian framework, see for instance: Roberto Trotta, Bayes in the Sky: Bayesian inference and model selection in cosmology. Note the reference to Occam's razor. Bayesian modeling tries to capture part of the meaning of favoring the "simplest" theories:

Bayesian model comparison offers a formal way to evaluate whether the extra complexity of a model is required by the data, thus putting on a firmer statistical grounds the evaluation and selection process of scientific theories that scientists often carry out at a more intuitive level. For example, a Bayesian model comparison of the Ptolemaic model of epicycles versus the heliocentric model based on Newtonian gravity would favour the latter because of its simplicity and ability to explain planetary motions in a more economic fashion than the baroque construction of epicycles.

All very true, but I would just note that the heliocentric model did not "win" because of a Bayesian comparison, nor simply because it was simpler, but because it gave a more satisfactory (and in the end also more accurate) explanation of celestial mechanics, including the strange retrograde movements of the wandering stars. The Ptolemaic model had held out for a very long time because it was very accurate -- it also allowed indefinitely many adjustments to keep it accurate! -- but it provided no real explanation of the movements of the planets, merely a kinematic, mathematical model.
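The model-comparison idea in the Trotta quotation can be sketched with a toy example of my own (not taken from the paper): a "simple" fair-coin model M0 versus a "complex" model M1 with a free bias parameter, compared on 6 heads in 10 flips. The marginal likelihood automatically penalizes M1 for spreading its predictions over all possible biases, which is the Occam-razor effect.

```python
from math import comb, factorial

n, k = 10, 6  # observed: 6 heads in 10 flips

# M0: fair coin, no free parameters.
evidence_m0 = comb(n, k) * 0.5**n

# M1: unknown bias p with a uniform prior on [0,1].
# Marginal likelihood = C(n,k) * Integral p^k (1-p)^(n-k) dp
#                     = C(n,k) * k!(n-k)! / (n+1)!  =  1/(n+1)
evidence_m1 = comb(n, k) * factorial(k) * factorial(n - k) / factorial(n + 1)

bayes_factor = evidence_m0 / evidence_m1
print(round(evidence_m0, 4), round(evidence_m1, 4), round(bayes_factor, 2))
# → 0.2051 0.0909 2.26  (the simpler model is favored for near-even data)
```

With data far from 50/50 (say 10 heads in 10 flips) the factor reverses and the extra parameter earns its keep, which is the sense in which the comparison asks whether the complexity "is required by the data".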

1

In Bayesian statistics we have the following ...
H = Hypothesis
E = Evidence

P(H) = Prior probability of hypothesis
P(E|H) = Likelihood
P(E) = Probability of evidence
P(H|E) = Posterior probability

Note: P(E) = P(H) x P(E|H) + P(~H) x P(E|~H)

P(H|E) = [P(H) x P(E|H)]/P(E)

I haven't much experience with Bayesian probability, but P(H|E) = 1 is the best-case scenario. That happens when P(~H) x P(E|~H) = 0 (while P(H) x P(E|H) > 0).

It is possible that in actual practice there is a threshold for P(H|E), but I'm unaware of it.
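The formulas above can be checked with a short sketch; the numbers plugged in are my own illustrative assumptions, not from the answer.

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    # Law of total probability: P(E) = P(H)P(E|H) + P(~H)P(E|~H)
    p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
    # Bayes' theorem: P(H|E) = P(H)P(E|H) / P(E)
    return p_h * p_e_given_h / p_e

print(posterior(0.3, 0.8, 0.2))   # ≈ 0.632 — ordinary case, posterior in (0, 1)
print(posterior(0.3, 0.8, 0.0))   # → 1.0 — P(~H)·P(E|~H) = 0 forces certainty
```

The second call shows the best-case scenario described above: the posterior reaches 1 exactly when the evidence is impossible under ~H.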

0

Epistemology is about the creation and assessment of knowledge. Bayesian epistemology (BE) isn't really about either of those issues and so isn't epistemology.

BE starts with some ideas that were created somehow, to which probabilities have somehow been assigned, then states that somehow those probabilities are updated and this is somehow relevant to judging those theories. Every place where somehow occurs in that sentence is a place where there should be an explanation that is absent.

In reality this doesn't make any sense, for several reasons. First, a probability is assigned to some element in a set of possibilities. To get that set of possibilities you need some explanation, to which BE then claims you should assign a probability in order to judge it. But there is no way of specifying the set of all possible theories, nor of picking a measure over it, so this doesn't work.

Second, the rules of probability aren't always applicable to reality as described by existing physical theories; notably, they fail in quantum interference experiments. See Section 2 of

https://arxiv.org/abs/math/9911150

Third, there are well-known results in probability that make numbers obeying the rules of probability unsuitable as a measure for assessing theories, such as the Popper–Miller theorem:

https://royalsocietypublishing.org/doi/abs/10.1098/rsta.1987.0033

Fourth, on a broader level, the idea of supporting theories doesn't make any sense. An argument uses assumptions and rules that are supposed to take true assumptions to true results. But in reality any assumption or rule is either correct or incorrect, and if you haven't found a flaw in it, all that implies is that you haven't found one, not that it doesn't exist. This was pointed out decades ago by Popper, and long before him by other philosophers. The creation of knowledge actually starts by looking for flaws in our ideas, proposing solutions to those flaws, and criticizing the solutions until we find one with no known flaws despite attempts to find them: Popper called this critical rationalism. This explains both where we get ideas (they are attempts to correct flaws in existing ideas) and how to judge them (by whether they correct those flaws and have no known relevant flaws).

For a reading list on Popper see

https://fallibleideas.com/books#popper

For some development of Popper's ideas see

https://criticalfallibilism.com/

2
  • Do you think that my answer has any relation to Popper's ideas? I'm curious to know your response. I do think that Bayesianism doesn't capture the essence of explanation properly, but I'm also not sure if Popper's methodology is correct either. I suppose I need to learn more but was curious about your thoughts Commented Aug 26 at 12:14
  • You might want to look at "Realism and the aim of science" by Popper. Commented Aug 26 at 15:34
-1

When diagnosing why a car battery has gone flat while you are driving, the theories about the cause are: some part of the car is drawing more power than the alternator can supply to charge the battery, the battery is old, or the alternator is not doing its job (or some combination of the three). The theories are not determined by probability; they are determined by logic. Knowledge of which theory is correct is gained by eliminating the theories that are false through testing, i.e. by replacing the alternator and checking whether that has solved the problem. The only random element is which theory you test first. In this case a fault with the alternator is the most likely.

-2

Bayesianism does not generate new knowledge at all. In fact, in the history of science it has quite literally never been used to propose or confirm any new theory that explains what we see in the world.

Science is about explanations. It's about proposing mechanisms and theories that are hard to vary that fully predict and explain the results.

Bayesianism does not necessarily involve any models at all. It only involves predictions, but science has never purely relied upon predictions. It relies upon models that we conjecture to explain certain results.

The problem with Bayesianism is that once you remove the requirement of having a mechanism/model that explains the results, you can construct an infinite number of "theories" that "explain" anything. You can simply create a theory that "aliens cause the sun to rise every day" without explaining how, and per that theory, every observation of the sun rising every day "confirms" that theory.

This, for obvious reasons, is ridiculous.
