The problem starts with your sentence:

Examples based on incorrect prior assumptions are not acceptable as they say nothing about the internal consistency of the different approaches.

Yeah well, how do you know your prior is correct?

Take the case of Bayesian inference in phylogeny. The probability of at least one change is related to evolutionary time (branch length t) by the formula

$$p=1-e^{-\frac{4}{3}ut}$$

with u being the rate of substitution. 

Now you want to build a model of evolution based on a comparison of DNA sequences. In essence, you try to estimate a tree that models the amount of change between the DNA sequences as closely as possible. The p above is the probability of at least one change on a given branch. Evolutionary models describe the probabilities of change between any two nucleotides, and from these models the estimation function is derived, with either p or t as the parameter.
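For concreteness, the two parameterizations are linked one-to-one by the formula above. A minimal sketch (the function names are mine, and u = 1 is a made-up illustrative rate):

```python
import numpy as np

def p_from_t(t, u=1.0):
    """Probability of at least one change on a branch of length t."""
    return 1.0 - np.exp(-4.0 * u * t / 3.0)

def t_from_p(p, u=1.0):
    """Branch length implied by a change probability p (inverse of p_from_t)."""
    return -3.0 / (4.0 * u) * np.log1p(-p)  # log1p(-p) == log(1 - p)
```

This one-to-one link between p and t is exactly what makes the choice of prior scale matter below.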

Suppose you have no sensible prior knowledge, so you choose a flat prior for p. This inherently implies an exponentially decreasing prior for t. (It becomes even more problematic if you want to set a flat prior on t instead: the implied prior on p then depends strongly on where you cut off the range of t.)
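To make that concrete, a quick change of variables using the formula above:

$$t=-\frac{3}{4u}\ln(1-p), \qquad \frac{dp}{dt}=\frac{4}{3}u\,e^{-\frac{4}{3}ut},$$

so a flat prior $f(p)=1$ on $(0,1)$ transforms to

$$f(t)=f(p)\left|\frac{dp}{dt}\right|=\frac{4}{3}u\,e^{-\frac{4}{3}ut},$$

i.e. an exponential prior on t with rate $\frac{4}{3}u$. Conversely, a flat prior on t over $[0,T]$ induces $f(p)=\frac{3}{4uT}\cdot\frac{1}{1-p}$ on $\left(0,\,1-e^{-\frac{4}{3}uT}\right)$, a density that piles up near its upper endpoint and whose height and support both depend on the cutoff T.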

In theory, t can be infinite, but when you allow an infinite range, the area under a flat density function becomes infinite as well, so you have to define a truncation point.

So you define one for the prior. Now, when you choose the truncation point sufficiently large, it is not difficult to prove that both ends of the credible interval rise, until at some point the true value is no longer contained in the credible interval. Unless you have a very good idea about the prior, Bayesian methods are not guaranteed to be equal or superior to other methods.
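A minimal numerical sketch of that effect, with all values illustrative (u = 1, and a made-up sample of x = 3 mismatching sites out of n = 10). Note one assumption on my part: it uses the saturating Jukes-Cantor mismatch probability $\frac{3}{4}(1-e^{-\frac{4}{3}ut})$, the probability that a site is *observed* to differ, rather than the at-least-one-change probability above; its plateau at 3/4 keeps the likelihood positive at large t, which is what lets posterior mass leak into the flat tail:

```python
import numpy as np

def jc_mismatch(t, u=1.0):
    """Jukes-Cantor probability that a site is observed to differ after
    time t; saturates at 3/4 as t grows."""
    return 0.75 * (1.0 - np.exp(-4.0 * u * t / 3.0))

def credible_interval(T, n=10, x=3, grid=400_000, level=0.95):
    """Equal-tailed credible interval for t under a flat prior on [0, T],
    with a binomial likelihood for x mismatching sites out of n."""
    t = np.linspace(1e-6, T, grid)
    p = jc_mismatch(t)
    loglik = x * np.log(p) + (n - x) * np.log1p(-p)
    post = np.exp(loglik - loglik.max())  # flat prior: posterior ∝ likelihood
    cdf = np.cumsum(post)
    cdf /= cdf[-1]                        # normalize on the grid
    alpha = (1.0 - level) / 2.0
    lo = t[np.searchsorted(cdf, alpha)]
    hi = t[np.searchsorted(cdf, 1.0 - alpha)]
    return lo, hi

for T in (10, 100, 1_000, 10_000):
    lo, hi = credible_interval(T)
    print(f"T = {T:6d}: 95% credible interval for t = ({lo:9.2f}, {hi:9.2f})")
```

With these numbers the interval hugs the likelihood peak for small T, but both endpoints drift upward as T grows, precisely because the flat tail eventually carries most of the posterior mass.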

Ref: Joseph Felsenstein, Inferring Phylogenies, chapter 18.

On a side note, I'm getting sick of the Bayesian/frequentist quarrel. They are different frameworks, and neither is the Absolute Truth. The classical examples in favour of Bayesian methods invariably come from probability calculation, and not one frequentist will contradict them. The classical arguments against Bayesian methods invariably involve the arbitrary choice of a prior. And sensible priors are definitely possible.

It all boils down to the correct use of either method at the right time. I've seen very few arguments or comparisons where both methods were applied correctly. The assumptions of any method are very much underrated and far too often ignored.

EDIT: To clarify, the problem lies in the fact that, in the Bayesian framework, the estimate based on p differs from the estimate based on t when working with uninformative priors (which in a number of cases is the only possible solution). This is not true in the ML framework for phylogenetic inference. It is not a matter of a wrong prior; it is inherent to the method.
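One way to see why this asymmetry is inherent: maximum likelihood is invariant under reparameterization, so the two ML estimates automatically agree,

$$\hat{p}=1-e^{-\frac{4}{3}u\hat{t}}.$$

Flat priors are not invariant: as derived above, a flat prior on p corresponds to an exponential prior on t, not a flat one, so "uninformative on p" and "uninformative on t" are two genuinely different models and yield different posteriors.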
