I am relatively new to data.table and was hoping to use its fast subsetting to carry out some bootstrapping procedures.

In my example I have two columns of 1 million random normals, and I want to take a sample of some of the rows and calculate the correlation between the two columns. I was hoping for some of the ~100x speed improvements suggested on the data.table webpage, but perhaps I am misusing data.table. If so, how should the function be structured to get this speed improvement?
Please see below for my example:
    require(data.table)        # needed for data.table()
    n <- 1e6
    set.seed(1)
    q <- data.frame(a = rnorm(n), b = rnorm(n))
    q.dt <- data.table(q)

    df.samp <- function(){cor(q[sample(seq(n), n*0.01), ])[2, 1]}
    dt.samp <- function(){q.dt[sample(seq(n), n*0.01), cor(a, b)]}

    require(microbenchmark)
    microbenchmark(median(sapply(seq(100), function(y){df.samp()})),
                   median(sapply(seq(100), function(y){dt.samp()})),
                   times = 100)

    Unit: milliseconds
                                                     expr       min        lq    median        uq       max neval
     median(sapply(seq(100), function(y) { df.samp() })) 1547.5399 1673.1460 1747.0779 1860.3371 2028.6883   100
     median(sapply(seq(100), function(y) { dt.samp() }))  583.4724  647.0869  717.7666  764.4481  989.0562   100
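For what it's worth, one way the bootstrap could be restructured so that data.table does the grouping is sketched below. This is only a sketch, not something from the original post: the names reps, idx, boot and rep_id are mine, and whether it helps at all depends on where the time actually goes (see the comments that follow).

    library(data.table)

    n    <- 1e6
    reps <- 100
    set.seed(1)
    q.dt <- data.table(a = rnorm(n), b = rnorm(n))

    # Draw all replicates' row indices up front (10,000 rows each, without
    # replacement within a replicate, matching sample(seq(n), n*0.01) above).
    idx <- data.table(
      rep_id = rep(seq_len(reps), each = n * 0.01),
      i      = as.vector(replicate(reps, sample.int(n, n * 0.01)))
    )

    boot <- q.dt[idx$i]                               # one big subset: all replicates at once
    boot[, rep_id := idx$rep_id]                      # tag each row with its replicate id
    boot.cors <- boot[, .(r = cor(a, b)), by = rep_id]   # correlation per replicate
    median(boot.cors$r)

The idea is to replace 100 separate calls to [.data.table with a single subset followed by one grouped computation; the per-replicate cor() calls remain, so any gain is limited to the subsetting overhead.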
From the comments: it's the cor() on all your samples that is the irreducible time bottleneck. Have you tried samp <- sample.int(n, n/100); microbenchmark(q[samp, ], q.dt[samp])? I'm seeing the data.table subsetting as about twice as fast with that.
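To make that suggestion concrete, here is a runnable version of the comparison; the cor_only timing is added here to show the correlation cost alongside the subsetting cost and is not part of the original comment.

    library(data.table)
    library(microbenchmark)

    n <- 1e6
    set.seed(1)
    q    <- data.frame(a = rnorm(n), b = rnorm(n))
    q.dt <- data.table(q)

    samp <- sample.int(n, n / 100)

    # Time the two subsetting methods and the cor() call separately, so the
    # cost of the correlation itself is visible next to the subsetting cost.
    microbenchmark(
      df_subset = q[samp, ],
      dt_subset = q.dt[samp],
      cor_only  = cor(q[samp, ])[2, 1]
    )

If cor_only dominates, faster subsetting can only shave off a small fraction of each replicate, which would explain why the overall speedup is closer to 2-3x than 100x.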