I have been tinkering with power simulations recently and I have the following code:
    library(MASS)
    library(Matrix)

    # Simulated design: factors mmm (2 levels), ttt (2 levels),
    # sss (70 levels), iii (40 levels); 11200 rows in total
    simdat <- data.frame(
      mmm = rep(rep(factor(1:2, labels = c("m1", "m2")), each = 2), times = 2800),
      ttt = rep(factor(1:2, labels = c("t1", "t2")), times = 5600),
      sss = rep(factor(1:70), each = 160),
      iii = rep(rep(factor(1:40), each = 4), times = 70)
    )

    beta <- c(1, 2)                            # fixed-effect coefficients
    X1 <- model.matrix(~ mmm, data = simdat)   # fixed-effects design matrix
    Z1 <- model.matrix(~ ttt, data = simdat)   # random-effects design matrix

X1 and Z1 are 11200 x 2 matrices. With the help of Stack Overflow I managed to make my calculations a lot more efficient than they were before:
    funab <- function(){
      # Draw random effects: one pair per level of sss (70) and of iii (40)
      ran_sub <- mvrnorm(70, mu = c(0, 0), Sigma = matrix(c(10, 3, 3, 2), ncol = 2))
      ran_ite <- mvrnorm(40, mu = c(0, 0), Sigma = matrix(c(10, 3, 3, 2), ncol = 2))
      Mb <- as.vector(X1 %*% beta)                                         # fixed part
      M1 <- rowSums(Z1 * ran_sub[rep(1:70, each = 160), ])                 # sss random part
      M2 <- rowSums(Z1 * ran_ite[rep(rep(1:40, each = 4), times = 70), ])  # iii random part
      Mout <- Mb + M1 + M2
      Y <- as.vector(Mout) + rnorm(length(Mout), mean = 0, sd = 0.27)      # add residual noise
      Y
    }

Y will then be a vector of length 11200. I then replicate this function a lot (say 1000 times):
    sim <- replicate(n = 1000, expr = funab(), simplify = FALSE)

sim will then be a list of 1000 vectors, each of length 11200. Given that I want to do this a lot more, and possibly include more code in funab(), I wonder whether it is advisable to use sparse matrices for X1 and Z1 in the calculations in funab() as it stands now?
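For reference, this is roughly what I have in mind; a sketch using Matrix::sparse.model.matrix that I have not benchmarked yet:

    # Sketch: sparse analogues of the design matrices (untested for speed)
    X1s <- sparse.model.matrix(~ mmm, data = simdat)
    Z1s <- sparse.model.matrix(~ ttt, data = simdat)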
microbenchmark? You can use it to compare performance across functions, a.k.a. benchmarking. Simply install.packages("microbenchmark", dependencies = TRUE), then require(microbenchmark) and example(microbenchmark); you know the drill. I've used microbenchmark in this SO answer.
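For instance, something like this (an untested sketch; X1 and beta are the objects from your question, and X1s is just a sparse copy built here so the snippet is self-contained) would tell you whether the sparse product is actually faster on your matrices:

    library(Matrix)
    library(microbenchmark)

    # Sparse copy of the fixed-effects design matrix from the question
    X1s <- Matrix(X1, sparse = TRUE)

    # Time the dense product against the sparse product
    microbenchmark(
      dense  = X1 %*% beta,
      sparse = X1s %*% beta,
      times  = 100
    )

The same pattern works one level up: wrap a dense and a sparse variant of funab() as the two expressions and compare whole-simulation timings before committing to either.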