This document discusses using Oracle Data Integrator (ODI) for Extract, Transform, Load (ETL) processes involving big data. It demonstrates how ODI can run ETL jobs on Hadoop-ecosystem technologies such as Hive, Pig, and Spark, walking through examples of importing Hive table metadata into the ODI repository and creating physical mappings for Hive, Pig, and Spark transformations. It emphasizes that ODI provides governance, orchestration, and monitoring for big data ETL processes while leveraging the native Hadoop engines for execution.
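To make the workflow concrete, the sketch below shows the kind of HiveQL that an ODI mapping targeting Hive might generate once the source table's metadata has been imported into the repository. All table and column names here (web_logs_raw, page_hits_daily, and their fields) are hypothetical, chosen only to illustrate a filter-and-aggregate transformation; they do not come from the document.

```sql
-- Hypothetical source table, as it might appear after reverse-engineering
-- its metadata into the ODI repository.
CREATE TABLE IF NOT EXISTS web_logs_raw (
  ip     STRING,
  ts     STRING,
  url    STRING,
  status INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- Hypothetical target table for the mapping.
CREATE TABLE IF NOT EXISTS page_hits_daily (
  log_date STRING,
  url      STRING,
  hits     BIGINT
);

-- A physical mapping with a filter and an aggregate component could
-- translate to HiveQL along these lines: keep successful requests,
-- count hits per URL per day, and load the target table.
INSERT OVERWRITE TABLE page_hits_daily
SELECT to_date(ts) AS log_date,
       url,
       COUNT(*)    AS hits
FROM web_logs_raw
WHERE status = 200
GROUP BY to_date(ts), url;
```

The same logical mapping could instead be deployed to Pig or Spark as the physical technology, with ODI generating the corresponding Pig Latin or Spark code rather than HiveQL, which is the point of separating logical design from physical execution.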