
I am reading a csv file in Pyspark as follows:

df_raw=spark.read.option("header","true").csv(csv_path) 

However, the data file has quoted fields with embedded commas in them, which should not be treated as delimiters. How can I handle this in PySpark? I know pandas can handle this, but can Spark? The version I am using is Spark 2.0.0.

Here is an example which works in Pandas but fails using Spark:

In [1]: import pandas as pd

In [2]: pdf = pd.read_csv('malformed_data.csv')

In [3]: sdf = spark.read.format("org.apache.spark.csv").csv('malformed_data.csv', header=True)

In [4]: pdf[['col12','col13','col14']]
Out[4]:
                  col12                                              col13  \
0  32 XIY "W" JK, RE LK  SOMETHINGLIKEAPHENOMENON#YOUGOTSOUL~BRINGDANOISE
1                   NaN                      OUTKAST#THROOTS~WUTANG#RUNDMC

   col14
0   23.0
1    0.0

In [5]: sdf.select("col12","col13",'col14').show()
+------------------+--------------------+--------------------+
|             col12|               col13|               col14|
+------------------+--------------------+--------------------+
|  "32 XIY ""W"" JK|              RE LK"|SOMETHINGLIKEAPHE...|
|              null|OUTKAST#THROOTS~W...|                 0.0|
+------------------+--------------------+--------------------+

The contents of the file :

col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15,col16,col17,col18,col19
80015360210876000,11.22,X,4076710258,,,sxsw,,"32 YIU ""A""",S5,,"32 XIY ""W"" JK, RE LK",SOMETHINGLIKEAPHENOMENON#YOUGOTSOUL~BRINGDANOISE,23.0,cyclingstats,2012-25-19,432,2023-05-17,CODERED
61670000229561918,137.12,U,8234971771,,,woodstock,,,T4,,,OUTKAST#THROOTS~WUTANG#RUNDMC,0.0,runstats,2013-21-22,1333,2019-11-23,CODEBLUE

4 Answers


I noticed that your problematic line has escaping that uses double quotes themselves:

"32 XIY ""W"" JK, RE LK"

which should be interpreted simply as

32 XIY "W" JK, RE LK

As described in RFC-4180, page 2 -

  1. If double-quotes are used to enclose fields, then a double-quote appearing inside a field must be escaped by preceding it with another double quote

That's what Excel does, for example, by default.

However, in Spark (as of Spark 2.1) escaping is done by default in a non-RFC way, using the backslash (\). To fix this you have to explicitly tell Spark to use the double quote as the escape character:

.option("quote", "\"") .option("escape", "\"") 

This may explain why a comma character wasn't interpreted correctly when it was inside a quoted column.
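For reference, here is a minimal sketch of a full read call with those two options applied, assuming the malformed_data.csv file from the question:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Treat " as both the quote and the escape character (RFC-4180 style),
# so embedded commas and doubled quotes inside quoted fields parse correctly.
df_raw = (spark.read
          .option("header", "true")
          .option("quote", "\"")
          .option("escape", "\"")
          .csv("malformed_data.csv"))

df_raw.select("col12", "col13", "col14").show(truncate=False)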

Options for the Spark CSV format are not well documented on the Apache Spark site, but here is somewhat older documentation that I still find useful quite often:

https://github.com/databricks/spark-csv

Update Aug 2018: Spark 3.0 might change this behavior to be RFC-compliant. See SPARK-22236 for details.


7 Comments

shouldn't the first argument to option be double-quoted instead of single-quoted?
With Spark 2.3 this gives the following error: quote cannot be more than one character
It's just one character - " (one double quote character, not two single quotes). Haven't tested, should be the same in Spark 2.3
Thanks Tagar. I'm using Spark 3.3.1 and they are still using \

For anyone doing this in Scala: Tagar's answer nearly worked for me (thank you!); all I had to do was escape the double quote when setting my option param:

.option("quote", "\"") .option("escape", "\"") 

I'm using Spark 2.3, so I can confirm Tagar's solution still seems to work the same under the new release.

1 Comment

The thing is that correctly created comma-separated (CSV) files should quote and escape columns that contain the separator inside their content. Make sure you open the CSV in a text editor (not Excel) to verify whether this is the case.

For anyone whose parsing is still not working after applying Tagar's solution:

In Pyspark 3.1.2, .option("quote", "\"") is the default, so that part is not necessary. However, in my case the data contains multi-line values, so Spark could not tell a \n inside a single data point apart from the \n at the end of a row. Using .option("multiLine", True) solved my issue, along with .option("escape", "\""). Generally it is better to use the multiLine option by default.
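A minimal sketch of how those options might be combined (the file path below is just a placeholder, not from the original answer):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# multiLine lets the parser continue a quoted field across newlines,
# so an embedded \n no longer ends the record prematurely.
# escape="\"" keeps doubled quotes inside quoted fields intact.
df = (spark.read
      .option("header", "true")
      .option("multiLine", True)
      .option("escape", "\"")
      .csv("data_with_embedded_newlines.csv"))  # placeholder path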

3 Comments

I wonder if this has performance implications, since Spark then cannot split multiline values across the workers?
I am not sure whether Spark can split multi-line values across the workers. If you are sure your data has no multi-line values, you may not need the option, but in my case I am dealing with text data, so I can never be sure whether it contains '\n'.
Indeed it has, as this nice article shows: kokes.github.io/blog/2018/05/19/spark-sane-csv-processing.html

A delimiter (comma) specified inside quotes will be ignored by default. Spark SQL has a built-in CSV reader as of Spark 2.0.

df = session.read \
    .option("header", "true") \
    .csv("csv/file/path")

more about CSV reader here - .

1 Comment

This doesn't seem to work. See the example I provided above.
