Sqoop on Spark provides a way to run Sqoop jobs on Apache Spark for parallel data ingestion, letting Sqoop jobs leverage Spark's speed and its growing community. The key aspects covered are:

- Sqoop jobs can be created and executed on Spark by initializing a Spark context and wrapping Sqoop job setup and Spark initialization together in the driver (see the sketch after this list).
- Data is partitioned and extracted in parallel using Spark RDDs, with map transformations calling the Sqoop connector extract APIs.
- Loading likewise uses Spark RDDs and map transformations to load data in parallel by calling the connector load APIs.
- Microbenchmarks show Spark-based ingestion can be significantly faster than traditional MapReduce-based Sqoop for large datasets.
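
The sketch below illustrates the flow described above under stated assumptions: a Spark context is initialized in the driver, a partition plan is turned into an RDD so extraction runs in parallel, and the loaded output is produced by map transformations followed by an action. `SqoopPartition`, `getPartitions`, `extract`, and `load` are hypothetical stand-ins for the Sqoop connector partitioner, extract, and load APIs, not the actual Sqoop classes; only the Spark APIs shown are real.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical stand-in for a Sqoop connector partition: a key range
// that one task will extract from the source table.
case class SqoopPartition(lowerBound: Long, upperBound: Long)

object SqoopOnSparkSketch {

  // Placeholder for the connector's partitioner: split the source key
  // range into n partitions for parallel extraction.
  def getPartitions(minKey: Long, maxKey: Long, n: Int): Seq[SqoopPartition] = {
    val step = math.max(1L, (maxKey - minKey + 1) / n)
    (minKey to maxKey by step).map(lo => SqoopPartition(lo, math.min(lo + step - 1, maxKey)))
  }

  // Placeholder for the connector's extract API: read the rows covered
  // by one partition from the source (e.g. over JDBC).
  def extract(p: SqoopPartition): Iterator[String] =
    Iterator(s"rows for keys ${p.lowerBound}..${p.upperBound}") // stub

  // Placeholder for the connector's load API: write one partition's
  // records to the target (e.g. HDFS, Hive, HBase).
  def load(records: Iterator[String]): Unit =
    records.foreach(r => println(s"loading: $r")) // stub

  def main(args: Array[String]): Unit = {
    // 1. Wrap Sqoop job setup and Spark initialization in one driver.
    val conf = new SparkConf().setAppName("sqoop-on-spark-sketch").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // 2. Turn the partition plan into an RDD, one Spark partition per
    //    Sqoop partition, so extraction runs in parallel.
    val partitions = getPartitions(minKey = 0L, maxKey = 999L, n = 4)
    val partitionRDD = sc.parallelize(partitions, partitions.size)

    // 3. Extract: each task calls the connector extract API for its partition.
    val extractedRDD = partitionRDD.mapPartitions(_.flatMap(extract))

    // 4. Load: each task calls the connector load API for its slice of
    //    records; an action (count) forces the lazy pipeline to run.
    val loaded = extractedRDD.mapPartitions { records =>
      load(records)
      Iterator(1)
    }
    loaded.count()

    sc.stop()
  }
}
```

One design point worth noting from this sketch: because RDD transformations are lazy, the extract and load steps are only map transformations until a terminating action (here `count()`) triggers the actual parallel ingestion.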