Timeline for: error received when converting a pandas DataFrame to a Spark DataFrame
Current License: CC BY-SA 3.0
4 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| Jan 19, 2016 at 21:07 | comment | added | Sergey Bushmanov | | @b4me You may think about accepting the solution to your earlier problem and posting the new one as a new question. |
| Jan 19, 2016 at 20:54 | comment | added | b4me | | The previous error is gone, but I got a type error. Wondering if I need to set each column's type specifically? Thank you. `>>> df = sqlContext.createDataFrame(pdf)` → `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/spark/spark-hadoop/python/pyspark/sql/context.py", line 406, in createDataFrame rdd, schema = self._createFromLocal(data, schema) File "/opt/spark/spark-hadoop/python/pyspark/sql/context.py", line 322, in _createFromLocal ... TypeError: Can not merge type <class 'pyspark.sql.types.DoubleType'> and <class 'pyspark.sql.types.StringType'>` |
| Jan 15, 2016 at 20:56 | history | edited | Sergey Bushmanov | CC BY-SA 3.0 | deleted 38 characters in body |
| Jan 15, 2016 at 20:41 | history | answered | Sergey Bushmanov | CC BY-SA 3.0 | |
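The `TypeError: Can not merge type DoubleType and StringType` reported in the comment above typically means a single pandas column holds mixed values (some numeric, some string), so Spark's per-row type inference produces conflicting types it cannot merge. A minimal sketch of one way to reproduce and fix this on the pandas side, assuming a hypothetical column `x` (the thread does not show the actual data); the `sqlContext.createDataFrame(pdf)` call is the one from the comment and is left as a comment here since it needs a running Spark context:

```python
import pandas as pd

# Hypothetical reconstruction: a column mixing floats and strings gets
# dtype "object" in pandas, so Spark infers DoubleType for some rows and
# StringType for others and fails to merge them.
pdf = pd.DataFrame({"x": [1.0, "2.5", 3.0]})
assert pdf["x"].dtype == object  # mixed types collapse to object dtype

# One fix: coerce the column to a single numeric dtype before conversion;
# unparseable values become NaN instead of raising.
pdf["x"] = pd.to_numeric(pdf["x"], errors="coerce")

# Every value is now float64, so Spark would infer DoubleType for the
# whole column:
#   df = sqlContext.createDataFrame(pdf)
# Alternatively, pass an explicit schema (pyspark.sql.types.StructType)
# to createDataFrame so no inference is needed at all.
print(pdf["x"].dtype)
```

Either approach (cleaning dtypes in pandas or supplying an explicit schema) avoids relying on Spark's row-by-row inference, which is what raises the merge error.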