If a statistic doesn't reveal significance, do I have to calculate power for it?

Following the design and data described in this question, I did a simple one-way within-subjects repeated-measures (RM) ANOVA and found some significant p-values. I then applied non-orthogonal post-hoc Tukey's HSD tests, and where I got significant results I applied the Holm-Bonferroni (1979) correction. Whenever p-values survived the FWER correction, I calculated means and 95% CIs for the associated pairwise comparisons.
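For concreteness, here is a rough Python sketch of the pipeline described above, on simulated data. Note the pairwise step uses paired t-tests with Holm correction rather than Tukey's HSD (statsmodels' Tukey routine assumes independent groups, not within-subjects data); the column names (`subject`, `condition`, `score`) and effect sizes are illustrative assumptions, not from the actual experiment.

```python
from itertools import combinations

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

# Simulate a balanced one-way within-subjects design:
# 12 subjects, 3 conditions, condition "C" shifted upward.
rng = np.random.default_rng(0)
n_subj, conds = 12, ["A", "B", "C"]
effects = {"A": 0.0, "B": 0.2, "C": 1.0}
rows = []
for s in range(n_subj):
    base = rng.normal(0, 1)  # per-subject baseline (within-subjects correlation)
    for c in conds:
        rows.append({"subject": s, "condition": c,
                     "score": base + effects[c] + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# One-way within-subjects RM ANOVA (omnibus test).
anova = AnovaRM(df, depvar="score", subject="subject", within=["condition"]).fit()
p_omnibus = anova.anova_table["Pr > F"].iloc[0]

# Pairwise paired t-tests, then Holm-Bonferroni FWER correction.
wide = df.pivot(index="subject", columns="condition", values="score")
pairs = list(combinations(conds, 2))
pvals = [stats.ttest_rel(wide[a], wide[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
```

For comparisons that survive the correction, `wide[a] - wide[b]` gives the paired differences from which a mean and 95% CI can be computed.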

My question is: if I don't observe a significant result at any of the above steps, do I have to carry out a power analysis for the RM ANOVA, apply Tukey's HSD test or Holm-Bonferroni adjustments, or can I simply report the RM ANOVA results without doing a power analysis?

The problem is that I only started immersing myself in biostatistics after running my experiments, and unfortunately I didn't run a power analysis beforehand.
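For reference, a power calculation for a single paired comparison can be sketched with statsmodels' `TTestPower` (the paired design reduces to a one-sample t-test on the differences). The effect size here (Cohen's d = 0.5) and the sample size of 12 pairs are assumed values for illustration only:

```python
from statsmodels.stats.power import TTestPower

analysis = TTestPower()

# Achieved power for an assumed effect of d = 0.5 with 12 pairs at alpha = 0.05.
power = analysis.solve_power(effect_size=0.5, nobs=12, alpha=0.05)

# Conversely: pairs needed to reach 80% power for the same assumed effect.
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
```

Note that such a calculation is only informative for an effect size chosen on substantive grounds beforehand, not one estimated from the same data being tested.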

