
I'm new to circular statistics and have been playing with bpnreg and circglmbayes. I'm not sure how to interpret the output. I have consulted the CrossValidated database and the papers/tutorials associated with these packages, but my limited mathematics background prevents me from generalizing to my case.

I have a circular dependent variable and two circular independent variables. My goal is to compare how strongly the two independent variables predict the dependent variable:

$$\text{outangle} = b_1 \cdot \text{angle1} + b_2 \cdot \text{angle2}$$

I understand that for the mentioned packages I should include two linear predictors per circular predictor:

$$\text{outangle} = b_1 \sin(\text{angle1}) + b_2 \cos(\text{angle1}) + b_3 \sin(\text{angle2}) + b_4 \cos(\text{angle2})$$

The models run, but I don't know how to interpret their output, since there are two predictors per variable. I would like to be able to say something like "angle1 contributes most strongly to the outcome angle, while angle2 also contributes but less strongly". In other words: how can I convert the output from the two linear predictors into a single coefficient for the circular variable? The outputs from the two packages are also quite different. I understand that they use different methods (e.g. bpnreg uses the projected normal distribution), but I would still expect them to produce the same qualitative answer.

Simplified example code and output are pasted below:

library(bpnreg)
library(circglmbayes)

# test data
angles1 <- c(1, 1.1, 0.9, 2, 1.5, 2.5, 3.0, 0.5)
angles2 <- c(1.2, 0.7, 1.0, 2.3, 1.4, 2.8, 0.1, 0.2)
outangle <- c(1.1, 1, 0.8, 2.1, 1.4, 2.7, 3.1, 0.4)

# find linear components
sori1 <- sin(angles1)
cori1 <- cos(angles1)
sori2 <- sin(angles2)
cori2 <- cos(angles2)

# make dataframe
df <- data.frame(outangle, sori1, cori1, sori2, cori2)

# run bpnr regression
circfit <- bpnr(outangle ~ 1 + sori1 + cori1 + sori2 + cori2, df)
circfit

# run circGLM regression
circfit2 <- circGLM(outangle ~ 1 + sori1 + cori1 + sori2 + cori2, df)
circfit2

(truncated) output for bpnr:

Linear Coefficients

Component I:
                 mean      mode        sd      LB HPD    UB HPD
(Intercept) -31.38534 -34.43952 16.939204 -63.276156 -1.988348
sori1        53.25621  77.42373 29.152543  -2.583563 96.693508
cori1        12.73618  12.34990  5.416473   4.027589 24.099637
sori2       -21.23710 -27.35492 16.259473 -48.074359  9.390253
cori2        21.34205  28.25890 11.224285  -2.308050 36.577529

Component II:
                 mean      mode        sd      LB HPD      UB HPD
(Intercept) -33.16929 -49.09227 19.125987 -58.474295   1.0513257
sori1       108.48108 156.23939 56.547787   5.689475 182.3388648
cori1       -12.65639 -17.02416  7.716536 -24.791198   0.5974747
sori2       -43.46557 -68.49125 29.332958 -83.259452   8.4257935
cori2        10.43962  16.47495  6.794084  -1.659991  20.2801103

Circular Coefficients

Continuous variables:
         mean ax    mode ax     sd ax      LB ax     UB ax
sori1  0.3582156  0.3453063 0.2579013  0.1491464 0.6097201
cori1 -0.3463878  0.4257769 0.9575427 -2.0946958 1.1603828
sori2 -0.6953270 -0.8023903 1.6901613 -2.0029269 1.5081406
cori2  1.6737354  1.7497827 1.2241280 -1.0348414 3.3354418

(truncated) output for circGLM:

Coefficients:
          Estimate     SD     LB      UB
Intercept    1.604  0.061  1.493   1.712
Kappa       56.128 47.798  2.052 168.050
sori1       -0.150  0.099 -0.359   0.039
cori1       -0.522  0.092 -0.680  -0.344
sori2        0.108  0.077 -0.049   0.250
cori2        0.016  0.072 -0.129   0.189

1 Answer

The short answer is that there is no way to compute a single coefficient for a circular predictor, so if we want to quantify the effect of one circular predictor, we have to rely on something such as model fit or information criteria.

Why we have this problem

Interpreting the cos and sin coefficients separately does not work: rotating the predictor, which should not change the strength of its effect, would nevertheless change the credible intervals of both the cos and sin components. Moreover, in almost all cases a model containing only one of the two components (cos or sin) of a predictor does not make sense and should not be interpreted. So, we should add both to the model or add neither.
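To see the rotation point concretely, here is a minimal sketch that reuses the data and design variables from the example code in the question; the rotation of 0.5 radians and the names angles1_rot, df_rot, and circfit_rot are arbitrary choices for illustration:

# Rotate angle1 by a constant; the rotated predictor carries exactly the
# same information, yet its sin/cos components change, and so do their
# individual coefficients and credible intervals.
rot <- 0.5
angles1_rot <- (angles1 + rot) %% (2 * pi)

df_rot <- data.frame(outangle,
                     sori1 = sin(angles1_rot), cori1 = cos(angles1_rot),
                     sori2 = sin(angles2),     cori2 = cos(angles2))

# refit and compare the sori1/cori1 rows with the original fit
circfit_rot <- bpnr(outangle ~ 1 + sori1 + cori1 + sori2 + cori2, df_rot)
circfit_rot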

Finally, to prevent confusion: the bpnreg model doesn't only have cos and sin components for each predictor; it also splits the circular outcome into Component I (cos) and Component II (sin).
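As a rough sketch of the projected normal idea behind bpnreg (the exact parameterization may differ slightly from this): the outcome angle is modeled as the direction of a bivariate normal vector,

$$\theta = \operatorname{atan2}(Y_{II}, Y_{I}), \qquad (Y_{I}, Y_{II}) \sim N(\mu, I),$$

and each of the two components gets its own set of linear coefficients, which is why the bpnr output above shows two coefficient tables (Component I and Component II).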

How to compare models with model fit or information criteria

The approach (which I will simplify for clarity to a single circular predictor instead of the two you have) could be something like this:

  • Run the model without the circular predictor, call this M1.
  • Run the model with the circular predictor, call this M2.
  • Compare the two models M1 and M2. We can do this in several ways, in increasing complexity:
    • Compare the DIC of the two models, and prefer the model with the lowest DIC. The DIC is given by default in both packages.
    • Same as above, but with WAIC. This is given by both packages as well, and is generally a little bit better than the DIC.
    • Compute the marginal likelihood for each model and calculate the associated Bayes factor, which is the ratio of the marginal likelihoods. This is an option in the circbayes package, although that package is still somewhat experimental. You can also do it yourself with the bridgesampling package, but that is more complicated. This is the best option, but also the most complex to implement.
    • If the end goal is prediction, cross-validation is an option as well.

As you have two circular predictors, the process is the same, but there are four models to compare: a model with no circular predictors, one with only angle1, one with only angle2, and one with both. A sketch of this comparison is given below.
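For example, here is a minimal sketch of the four-model comparison with bpnreg (the same idea works with circGLM from circglmbayes). The names m1 to m4 are arbitrary, and rather than guessing at accessor functions for the fit criteria, which may differ between package versions, the sketch simply relies on the printed summaries, which report DIC and WAIC by default:

# M1: no circular predictors (intercept only)
m1 <- bpnr(outangle ~ 1, df)
# M2: angle1 only
m2 <- bpnr(outangle ~ 1 + sori1 + cori1, df)
# M3: angle2 only
m3 <- bpnr(outangle ~ 1 + sori2 + cori2, df)
# M4: both angle1 and angle2
m4 <- bpnr(outangle ~ 1 + sori1 + cori1 + sori2 + cori2, df)

# The printed output of each fit reports DIC and WAIC; prefer the model
# with the lowest values. The improvement from M1 to M2 versus M1 to M3
# indicates which predictor contributes more.
m1; m2; m3; m4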

(Full disclosure: I am the author of circglmbayes.)

  • Thanks so much for your response! If I understand correctly, the model comparison approach tells me whether the added predictors improve the model, i.e. whether they have a meaningful relationship with the dependent variable. However, it doesn't allow me to compare the predictors, i.e. to say which one predicts the dependent variable most strongly? Commented May 23, 2021 at 15:03
  • Also, what then IS the interpretation of the individual (linear) predictor coefficients? Am I right in assuming that, for instance, sin(angle) predicts the influence of the sine component of angle on the sine component of the outcome (etc.)? Commented May 23, 2021 at 15:04
  • On the first question: you can interpret the strength of the effect as the amount of improvement in model fit. If one predictor increases the model fit more than the other, it is the stronger predictor. However, there is no intuitive scale: we cannot say that one predictor is twice as strong as another. I do not know a way around this. Commented May 23, 2021 at 19:16
  • For the second question: I would recommend against trying to interpret these coefficients directly, because the interpretation is too contrived to be intuitive. Regardless, the real interpretation is that the parameters result from a reparameterization of the model in which the effect of a circular predictor phi enters as a * cos(phi - b). This can be rewritten as a * cos(b) * cos(phi) + a * sin(b) * sin(phi), so that if we compute cos(phi) and sin(phi) we can estimate a * cos(b) and a * sin(b) by linear regression. The interpretation of the two parameters is therefore that they are a * cos(b) and a * sin(b). Commented May 23, 2021 at 19:25
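To make the algebra in the last comment explicit (writing the comment's predictor phi as $\varphi$; the minus sign inside the cosine is chosen so that both expanded terms carry a plus sign):

$$a\cos(\varphi - b) = a\cos(b)\cos(\varphi) + a\sin(b)\sin(\varphi)$$

so entering $\cos(\varphi)$ and $\sin(\varphi)$ as linear predictors yields estimates of $a\cos(b)$ and $a\sin(b)$ as their coefficients.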
