  • $\begingroup$ 1/5 Thanks, @EdM. I agree p-values shouldn't be the be-all and end-all, but we're still a ways from convincing the scientific community. Also, note I stumbled upon this finding when estimating ORs (treating the outcome as binary instead of as proportions) as measures of effect size to report despite a lack of significance (this same "phenomenon" occurred in logistic models). I would not have found this simple effect had I not gone on to estimate ORs (I got the p-values for the four comparisons while estimating the ORs), and it remains that I want to know when/why this happens. $\endgroup$ Commented Aug 20, 2020 at 20:11
  • $\begingroup$ 2/5 As your quote says: I am interested in estimating parameters – the ORs (logistic regression), the estimated probabilities of the four groupings here. I am also interested in the magnitude of the OR and the difference in these probabilities. It’s actually all of this (i.e., looking at my results to see if they made sense, the direction and magnitudes of the differences, etc.) that led me to this – what I would consider – anomaly. But the reality is the journal also wants p-values, not just estimates, etc. $\endgroup$ Commented Aug 20, 2020 at 20:11
  • $\begingroup$ 3/5 I was wondering if power was the issue, but n = 354 isn't "that" small (although there is correlation, since each subject contributes six probabilities, one for each image). I appreciate your comment about the magnitudes of the p-values not necessarily being comparable: Maybe there is a general lack of power, and p for the simple effect and interaction would both be smaller with more data? $\endgroup$ Commented Aug 20, 2020 at 20:11
  • $\begingroup$ 4/5 What I can't understand is how a simple effect with p = 0.007 wouldn't "drive down" the p-value for the interaction to a "similar" level. It seems they should be better aligned. But maybe this is a logical fallacy, or my definition of what might be considered "well-aligned" is inappropriate/unreasonable. $\endgroup$ Commented Aug 20, 2020 at 20:11
  • 1
    $\begingroup$ @Meg the "simple effect" of Post-Pre for women is still reliable even if the interaction term isn't significant, as that "simple effect" properly takes into account the covariances among the coefficient estimates. I'd say to focus on that. (The fixed-effect coefficient covariance matrix might be informative in this context.) Someone who just throws away a p = 0.09 isn't thinking; that might be something worthy of further study even if you can't publish it as "significant." Inform your data analysis with your knowledge of the subject matter and an open mind. $\endgroup$ Commented Aug 20, 2020 at 21:27
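To make that last point concrete, here is a minimal sketch (in Python, with entirely made-up coefficient and covariance numbers, not the ones from this model) of how the simple-effect contrast combines the time coefficient, the interaction coefficient, and their covariance. Because the contrast variance includes a covariance term, its Wald p-value can sit well below the interaction's own p-value, as in the p = 0.007 vs. p = 0.09 pattern discussed above:

```python
import numpy as np
from scipy import stats

# Toy numbers (entirely made up) for a model with a Time (Post - Pre)
# main effect and a Time x Sex interaction, on the log-odds scale.
beta = np.array([0.80, -0.42])      # [b_time, b_time:sex]
V = np.array([[0.05, -0.04],        # covariance matrix of the two
              [-0.04, 0.06]])       # coefficient estimates

# Wald test of the interaction term alone
z_int = beta[1] / np.sqrt(V[1, 1])
p_int = 2 * stats.norm.sf(abs(z_int))

# Simple effect of Post - Pre for women: contrast c = (1, 1),
# whose variance c' V c picks up the (negative) covariance term
c = np.array([1.0, 1.0])
se_simple = np.sqrt(c @ V @ c)
z_simple = (c @ beta) / se_simple
p_simple = 2 * stats.norm.sf(abs(z_simple))

print(f"interaction:   p = {p_int:.3f}")     # > 0.05 with these numbers
print(f"simple effect: p = {p_simple:.3f}")  # < 0.05 with these numbers
```

With these invented numbers the negative covariance shrinks the simple-effect standard error (sqrt(0.05 + 0.06 - 0.08) ≈ 0.17) well below the interaction's (sqrt(0.06) ≈ 0.24), so the two tests can land on opposite sides of 0.05 without any contradiction.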