
Identify corrupted records in a dataset using PySpark

Last Updated : 07 Apr, 2025

Datasets can contain corrupt records, i.e., rows that do not follow the rules the correct records follow. For example, one record may be delimited with a pipe ("|") character while the rest of the file is comma-delimited (","), and the file is read with a comma separator, so that record cannot be parsed.

This article demonstrates three read modes that can be used to identify and handle corrupt records:

  • PERMISSIVE
  • DROPMALFORMED
  • FAILFAST

Let's discuss the modes one by one with examples, but first set up the environment. We are going to use Google Colab; here's how to set it up:

Step 1: Install Java and Spark in your Colab environment:

Python
!apt-get install openjdk-11-jdk-headless -qq > /dev/null
!wget -q https://downloads.apache.org/spark/spark-3.4.1/spark-3.4.1-bin-hadoop3.tgz
!tar xf spark-3.4.1-bin-hadoop3.tgz
!pip install -q findspark

Step 2: Set up environment variables:

Python
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-3.4.1-bin-hadoop3"

Step 3: Initialize Spark:

Python
import findspark
findspark.init()

from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").appName("CorruptedRecords").getOrCreate()

Now that the Colab environment is set up, let's explore how each of the three modes works.

To download the CSV file used in this article, click here.
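If you cannot download the file, a minimal stand-in with the same kind of corruption can be created directly in Colab. The rows below are hypothetical values invented only for illustration (they are not the article's real dataset): the file is pipe-delimited except for one comma-delimited row, and it is written to the same path the later examples read from.

Python
sample = """customer_id|customer_fname|customer_lname|customer_email|customer_password|customer_street|customer_city|customer_state|customer_zipcode
1|Richard|Hernandez|XXXXXXXXX|XXXXXXXXX|6303 Heather Plaza|Brownsville|TX|78521
2|Mary|Barrett|XXXXXXXXX|XXXXXXXXX|9526 Noble Embers Ridge|Littleton|CO|80126
3,Ann,Smith,XXXXXXXXX,XXXXXXXXX,3422 Blue Pioneer Bend,Caguas,PR,00725
"""

# Overwrites any file already at this path; skip this cell if you downloaded the real CSV.
with open("/content/corrupted_customer_details.csv", "w") as f:
    f.write(sample)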

PERMISSIVE

It is the default mode. In PERMISSIVE mode, NULLs are inserted for the fields that could not be parsed correctly. If you want to retain the bad records in the DataFrame, use the "columnNameOfCorruptRecord" option to identify them.

Example

This PySpark code reads a CSV file, identifies corrupted records, and counts the total number of records. It sets a schema for the data, reads the CSV with specified options (including handling corrupted records), filters and displays the corrupted records, and provides the total record count.

Python
from pyspark.sql.functions import col

customers_schema = '''
customer_id INT,
customer_fname STRING,
customer_lname STRING,
customer_email STRING,
customer_password STRING,
customer_street STRING,
customer_city STRING,
customer_state STRING,
customer_zipcode INT,
_corrupt_record STRING
'''

customers_df = spark.read.schema(customers_schema).format("csv").options(
    header=True,
    delimiter="|",
    mode="PERMISSIVE",
    columnNameOfCorruptRecord="_corrupt_record"
).load("/content/corrupted_customer_details.csv")

customers_df.filter(col("_corrupt_record").isNotNull()).show()
print(f"Total number of records while reading in PERMISSIVE mode: {customers_df.count()}")

In the above code, the "_corrupt_record" column is used to store the corrupted records. A short follow-up sketch after the output shows how to separate the clean rows from the corrupt ones.

Output:

Permissive Mode
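Once the corrupt rows are flagged, the same column can be used to split the clean rows from the bad ones and to drop the helper column. This is a small sketch building on the customers_df created above; the names clean_customers_df and corrupt_customers_df are only illustrative.

Python
from pyspark.sql.functions import col

# Cache first: some Spark versions reject queries on a raw CSV read that
# reference only the internal corrupt-record column.
customers_df.cache()

# Rows that parsed correctly, with the helper column removed.
clean_customers_df = customers_df.filter(col("_corrupt_record").isNull()).drop("_corrupt_record")

# Rows that failed to parse, kept aside for inspection or re-processing.
corrupt_customers_df = customers_df.filter(col("_corrupt_record").isNotNull())

clean_customers_df.show()
print(f"Clean records: {clean_customers_df.count()}, corrupt records: {corrupt_customers_df.count()}")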

DROPMALFORMED

This mode is used to drop corrupted records while trying to read from a given dataset.

Example

This PySpark code reads a CSV file and drops any malformed or corrupted records. It sets a schema for the data, reads the CSV with specified options (including dropping malformed records), displays the cleaned dataset, and provides the total record count.

Python
from pyspark.sql.functions import col

customers_schema = '''
customer_id INT,
customer_fname STRING,
customer_lname STRING,
customer_email STRING,
customer_password STRING,
customer_street STRING,
customer_city STRING,
customer_state STRING,
customer_zipcode INT,
_corrupt_record STRING
'''

customers_df = spark.read.schema(customers_schema).format("csv").options(
    header=True,
    delimiter=",",  # Ensure this matches your CSV
    mode="DROPMALFORMED"  # Drop corrupt rows
).load("/content/corrupted_customer_details.csv")

customers_df.show()
print(f"Total number of records after DROPMALFORMED mode: {customers_df.count()}")

In the show() output you will find only the 9 valid records. However, DROPMALFORMED may not change the result of customers_df.count(): when counting, Spark can skip parsing the individual columns, so the malformed rows are never detected and the original total is reported. A small workaround sketch follows the output below.

Output:

Dropmalformed Mode
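If you need count() to reflect the dropped rows, a common workaround is to cache the DataFrame so that every column is actually parsed (and the malformed rows discarded) before counting. This behaviour can vary between Spark versions, so treat the snippet below as a sketch.

Python
# Caching materializes the full parse of every column, so DROPMALFORMED can
# discard the bad rows before the count is computed.
customers_df.cache()
print(f"Records after caching: {customers_df.count()}")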

FAILFAST

This mode throws an error if malformed records are detected while reading the dataset.

Example

This PySpark code reads a CSV file in "FAILFAST" mode, which means it will fail and raise an exception if it encounters any malformed records that do not adhere to the specified schema. It sets a schema for the data, reads the CSV with the specified options, displays the dataset, and provides the total record count. If any malformed records are encountered, it will raise an exception and print the error message.

Python
from pyspark.sql import SparkSession

# Reuse the session created earlier (getOrCreate returns the running session).
spark = SparkSession.builder.appName("Read data in FAILFAST mode").getOrCreate()

customers_schema = '''
customer_id INT,
customer_fname STRING,
customer_lname STRING,
customer_email STRING,
customer_password STRING,
customer_street STRING,
customer_city STRING,
customer_state STRING,
customer_zipcode INT
'''

try:
    customers_df = spark.read.schema(customers_schema).format("csv").options(
        header=True,
        delimiter="|",
        mode="FAILFAST"
    ).load("/content/corrupted_customer_details.csv")
    customers_df.show()
    print(f"Total number of records while reading in FAILFAST mode: {customers_df.count()}")
except Exception as e:
    print(e)

Since there is one corrupt record in the dataset, the read raises an exception. The advantage of FAILFAST mode is that it does not let you proceed with a dataset that contains corrupted records. A small validation-gate sketch after the output shows one way to use this.

Output:

Failfast Mode
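A typical way to use FAILFAST (a sketch, not part of the original notebook; the helper name is_valid_csv is invented here) is as a validation gate: attempt a strict read first and only continue with downstream processing if it succeeds.

Python
def is_valid_csv(path, schema, delimiter="|"):
    """Return True if the file reads cleanly in FAILFAST mode, i.e. has no corrupt rows."""
    try:
        spark.read.schema(schema).format("csv").options(
            header=True, delimiter=delimiter, mode="FAILFAST"
        ).load(path).collect()  # force a full read so every row gets parsed
        return True
    except Exception:
        return False

if is_valid_csv("/content/corrupted_customer_details.csv", customers_schema):
    print("Dataset is clean, safe to continue.")
else:
    print("Dataset contains corrupt records, fix them before processing.")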

To download the Jupyter notebook with the entire code, click here.

