I have a Spark job that runs daily to load data from S3. The data consist of thousands of gzip files, and occasionally one or two of them are corrupted, which causes the whole `spark_reader.load()` call to fail with:

```
An error occurred while calling o112.load. incorrect header check
```

Is there a way to just log the corrupted files instead of failing the entire load?
Current code:

```python
def read_data(spark: SparkSession) -> DataFrame:
    spark_reader = spark.read.format("json")
    return spark_reader.load("s3://my_bucket/some_folder/")
```
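One workaround I'm considering (just a sketch, not part of my current job) is pre-validating each file's bytes before handing the folder to Spark, so corrupted files can be logged and excluded up front. The helper below is hypothetical: `is_valid_gzip` is my own name, and in a real job the bytes would come from S3 (e.g. via boto3 `get_object`) rather than local data:

```python
import gzip
import io
import zlib


def is_valid_gzip(data: bytes) -> bool:
    """Return True if `data` decompresses as a complete, uncorrupted gzip stream."""
    try:
        with gzip.GzipFile(fileobj=io.BytesIO(data)) as f:
            # Read in chunks so large files don't need to fit in memory at once.
            while f.read(1 << 20):
                pass
        return True
    except (OSError, EOFError, zlib.error):
        # OSError covers BadGzipFile (bad magic/header, CRC mismatch),
        # EOFError covers truncated streams, zlib.error covers corrupt deflate data.
        return False
```

The idea would then be to list the keys under `s3://my_bucket/some_folder/`, log the ones where this check fails, and pass only the valid paths to `spark_reader.load(...)`. I'm not sure whether this scales well for thousands of files, though.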