
I have a spark job that runs daily to load data from S3.

The data consists of thousands of gzip files. Occasionally, one or two of the files in S3 are corrupted, which causes the whole spark_reader.load() task to fail with:

    An error occurred while calling o112.load: incorrect header check

Is there a way to just log these corrupted files without breaking the load?

Current code:

    def read_data(spark: SparkSession) -> DataFrame:
        spark_reader = spark.read.format("json")
        return spark_reader.load("s3://my_bucket/some_folder/")
  • We need you to show some code to be able to help you Commented Nov 26 at 7:23

1 Answer


You can try the ignoreCorruptFiles option (see the Spark documentation). With it enabled, Spark skips files it cannot read instead of failing the job, and logs a warning for each skipped file, so the corrupted file paths show up in the executor logs.
