
I am trying to read a single column of a CSV file into R as quickly as possible. I am hoping to beat standard methods by a factor of 10 in the time it takes to get the column into RAM.

What is my motivation? I have two files; one called Main.csv which is 300000 rows and 500 columns, and one called Second.csv which is 300000 rows and 5 columns. If I system.time() the command read.csv("Second.csv"), it will take 2.2 seconds. Now if I use either of the two methods below to read the first column of Main.csv (which is 20% the size of Second.csv since it is 1 column instead of 5), it will take over 40 seconds. This is the same amount of time as it takes to read the whole 600 Megabyte file -- clearly unacceptable.

  • Method 1

    colClasses <- rep('NULL', 500)
    colClasses[1] <- NA
    system.time(read.csv("Main.csv", colClasses = colClasses))
    # 40+ seconds, unacceptable
  • Method 2

     read.table(pipe("cut -f1 Main.csv"))
     # 40+ seconds, unacceptable

How can I reduce this time? I am hoping for an R solution.

  • You can load your data into a database and select only the required column, or use HDF5 files instead of CSV. Commented Nov 2, 2013 at 15:08
  • @zero323 I need something that can do I/O with all of: Python, Java, and R. Commented Nov 2, 2013 at 15:10
  • require(data.table); fread( "path/to/file/Main.csv" ) will give you an instant speed improvement. Commented Nov 2, 2013 at 15:10
  • My rather old POC package might be interesting here that provides a way to write a data.frame in a special binary format that can be used later for reading only a few variables at a time. Basically it's a wrapper around save/readRDS and writing the columns to separate files etc. More details: stackoverflow.com/questions/4756989/… Commented Nov 2, 2013 at 15:13
  • 3
    Is your csv file really comma-separated? I would think that scan(pipe("cut -f1 -d, Main.csv")) might be worth a try. Commented Nov 2, 2013 at 15:35

2 Answers


I would suggest

scan(pipe("cut -f1 -d, Main.csv")) 

This differs from the original proposal (read.table(pipe("cut -f1 Main.csv"))) in two ways:

  • since the file is comma-separated and cut assumes tab-separation by default, you need to pass -d, to specify comma-separation
  • scan() is much faster than read.table for simple/unstructured data reads.
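The delimiter point is easy to verify outside R. A minimal sketch on a tiny hypothetical sample file (the file name and data below are made up for illustration):

```shell
# Create a small comma-separated sample (hypothetical data)
printf 'id,name,score\n1,alice,90\n2,bob,85\n' > /tmp/sample.csv

# Without -d, cut assumes tab-separation; since the lines contain no tabs,
# each entire line comes back as "field 1"
cut -f1 /tmp/sample.csv

# With -d, cut splits on commas and extracts just the first column:
# id, 1, 2
cut -f1 -d, /tmp/sample.csv
```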

According to the comments by the OP this takes about 4 rather than 40+ seconds.


5 Comments

It is really amazing to see that you can nest Linux command lines before reading the file. I can even put my Python cleaner in the pipe command to clean the data before reading the raw file! I am wondering: is it possible to read a table from stdin line by line (each row is a line) using scan efficiently?
Absolutely brilliant.
@B.Mr.W.: I'm afraid you're not going to be able to do much better (I saw your question elsewhere, but your constraints are very strong: it's very hard to think of a way to read line-by-line in R without lots of overhead). I don't think scan is going to be faster than readLines, but why don't you try it and see how it goes?
@BenBolker Actually you can use fread directly with a system command, so this... fread( "cut -f1 -d, Main.csv" ) could be even quicker?
maybe, but scan() really doesn't have very much overhead (in contrast to read.table())

There is a speed comparison of methods for reading large CSV files in this blog post. fread is the fastest by an order of magnitude.

As mentioned in the comments above, you can use the select parameter to select which columns to read - so:

fread("Main.csv", sep = ",", select = c("f1")) 

will work.
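For completeness, selecting a single column by header name can also be done at the shell level, in the spirit of the pipe-based answer above. A hedged sketch (the file name and column header here are hypothetical): look up the column's position from the header row, then feed it to cut.

```shell
# Hypothetical data: three columns, we want the one named "f1"
printf 'a,b,f1\n1,2,3\n4,5,6\n' > /tmp/main.csv

# Find the 1-based position of the "f1" header field
col=$(head -1 /tmp/main.csv | tr ',' '\n' | grep -nx 'f1' | cut -d: -f1)

# Extract that column (header included): f1, 3, 6
cut -f"$col" -d, /tmp/main.csv
```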

1 Comment

Can you select which rows to read in, i.e., select rows by column conditions? An fread equivalent of SELECT col_1, col_2 FROM file WHERE col_3 > 30.
