This document introduces Apache Spark. It first discusses MapReduce and its limitations in processing large datasets, in particular the need to write intermediate results to stable storage between jobs. Spark was developed to address these limitations by sharing data in memory across a cluster through resilient distributed datasets (RDDs). Transformations such as map and filter are applied to RDDs lazily, and key-value RDDs additionally support operations such as join and groupByKey. Because an RDD can be cached and reused without rereading data from disk, this model particularly benefits iterative algorithms and interactive queries compared to MapReduce.
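As a rough illustration of the RDD operations mentioned above, the following is a minimal Scala sketch, assuming a local Spark installation and a hypothetical input file events.log; the field-splitting logic and the labels dataset are likewise illustrative assumptions, not taken from the document.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RddSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("rdd-sketch").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    // Transformations (filter, map) are lazy: they only record lineage,
    // no data is read or computed yet.
    val lines  = sc.textFile("events.log")                 // hypothetical input
    val errors = lines.filter(_.contains("ERROR"))
    val pairs  = errors.map(line => (line.split(" ")(0), 1)) // assumed: first field is a key

    // Key-value RDDs support keyed operations such as reduceByKey,
    // groupByKey, and join.
    val counts = pairs.reduceByKey(_ + _)
    val labels = sc.parallelize(Seq(("host1", "web"), ("host2", "db"))) // illustrative
    val joined = counts.join(labels)

    // Caching keeps the RDD in memory, so iterative or interactive
    // queries can reuse it without recomputing from disk.
    counts.cache()

    // Actions (count, collect, ...) trigger the actual computation.
    println(counts.count())
    joined.collect().foreach(println)

    sc.stop()
  }
}
```

In this sketch, nothing runs until the count() and collect() actions are invoked; everything before them only builds up the lineage graph that Spark later executes across the cluster.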