
In my linear mixed-effects model I had a significant interaction, and I followed up the interaction with orthogonal contrast codes to see where the difference was.

How do I report my findings, other than a p-value? Do I need to report a t or F statistic as well?

    Type III Analysis of Variance Table with Satterthwaite's method
                             Sum Sq Mean Sq NumDF  DenDF  F value    Pr(>F)    
    group                      3566    1783     2  90.01   1.0697   0.34743    
    session                   30588   30588     1 629.03  18.3504 2.125e-05 ***
    trialtype               3004359 1001453     3 629.03 600.7995 < 2.2e-16 ***
    group:session             13907    6953     2 629.03   4.1715   0.01586 *  
    group:trialtype            6066    1011     6 629.03   0.6065   0.72522    
    session:trialtype         11775    3925     3 629.03   2.3547   0.07094 .  
    group:session:trialtype    6154    1026     6 629.03   0.6153   0.71815    

For the contrast codes, I left the last two columns as all 0s because I was only interested in comparing Group 1 pre-post (1 vs 4), Group 2 pre-post (2 vs 5), and Group 3 pre-post (3 vs 6). Please correct me if this is wrong.

    contrastmatrix <- cbind(c(1, 0, 0, -1, 0, 0),
                            c(0, 1, 0, 0, -1, 0),
                            c(0, 0, 1, 0, 0, -1),
                            c(0, 0, 0, 0, 0, 0),
                            c(0, 0, 0, 0, 0, 0))
    contrasts(pairwisegp) <- contrastmatrix
    summary.lm(aov(rt ~ pairwisegp))

    Coefficients: (2 not defined because of singularities)
                Estimate Std. Error t value Pr(>|t|)    
    (Intercept)  302.910      3.588  84.429   <2e-16 ***
    pairwisegp1    5.431      6.210   0.875   0.3821    
    pairwisegp2    2.016      6.223   0.324   0.7460    
    pairwisegp3   12.373      6.210   1.993   0.0467 *  
    pairwisegp4       NA         NA      NA       NA    
    pairwisegp5       NA         NA      NA       NA    
    It would help a lot to see a copy of the results summary, to see the evidence for the interaction and to clarify what you mean by "I followed up the interaction with orthogonal contrast codes to see where the difference was." Please edit your question to provide that information. Commented Aug 13, 2019 at 19:48

1 Answer


For the first part of your question, reporting p values by themselves provides little useful information. This thread is a good introduction to the limits, even the dangers, of relying on p values.

A reader is going to want to know the size of the sample on which the results are based, the magnitudes of individual effects and of differences among treatments, and a measure of the uncertainty in those estimates. The point estimates and associated confidence intervals of effects or differences among treatments are thus most important. See this thread for related discussion.
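As a rough illustration, a minimal sketch of that kind of reporting for a mixed model like the one behind your ANOVA table might look like the following; the names m, dat, and subject are placeholders, not taken from your post:

    library(lmerTest)

    ## hypothetical refit of a model like the one summarized in your ANOVA table;
    ## 'dat' and 'subject' are assumed names
    m <- lmer(rt ~ group * session * trialtype + (1 | subject), data = dat)

    ## fixed-effect estimates, standard errors, Satterthwaite df, t and p values
    summary(m)$coefficients

    ## Wald confidence intervals for the fixed effects, to report alongside the estimates
    confint(m, parm = "beta_", method = "Wald")

Reporting the estimates together with their confidence intervals (and the sample size) gives a reader far more to work with than p values alone.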

For the second part of your question, it seems that you might have moved from a mixed model in your first code block (assuming this is related to your other post, which shows a random effect for subjects) to a fixed-effects model in the second code block. I guess this for two reasons: you seem to be using aov() with contrasts instead of the contrast tests provided by the lmerTest package for mixed models (which evidently produced your first code block), and you have only one barely significant result among your contrasts in the second code block even though the mixed-model result in the first code block showed p = 0.016 for the interaction term.

This type of behavior occurs between paired and independent-groups t tests when there are substantial differences in baseline values among individuals but treatment-related changes from baseline are similar. Just as a paired t test is more powerful than an independent-groups t test in that scenario, if my guess is correct then your second code block has thrown away all the good you did by accounting for the individual baseline differences with your mixed model. In that case you need to go back and evaluate your contrasts in a way that takes the mixed-model results into account. I understand that the lmerTest package provides such possibilities, but I have little experience with that package.
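One way to do that, if my guess about the model is right, is to compute the pre/post contrasts from the mixed-model fit itself rather than from aov(). The sketch below uses the emmeans package on top of the lmerTest fit; it assumes the placeholder model m from the sketch above, so treat it as an outline rather than a drop-in solution:

    library(emmeans)

    ## estimated marginal means for each session within each group,
    ## computed from the mixed-model fit 'm' (placeholder from the earlier sketch)
    emm <- emmeans(m, ~ session | group)

    ## pre vs post contrast within each group, using the mixed-model
    ## standard errors instead of refitting with aov()
    prepost <- contrast(emm, method = "pairwise")
    summary(prepost)
    confint(prepost)

The confint() output gives each group's pre/post difference with a confidence interval, which is exactly the estimate-plus-uncertainty reporting recommended above.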
