This document summarizes Holden Karau's presentation on powering TensorFlow with big data using Apache Beam, Apache Spark, and Apache Flink. The presentation covers why deep learning needs large training datasets, how to prepare features from big data for TensorFlow using TensorFlow Transform, how TensorFlow Transform runs on Apache Beam, and how the same feature preparation can be carried into model serving. It also discusses the challenges of integrating Python with big data systems built on the Java Virtual Machine, and efforts to improve cross-language interoperability.
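
As an illustration of the workflow the talk describes (not code from the presentation itself), the sketch below defines a TensorFlow Transform preprocessing_fn and runs it through tft_beam.AnalyzeAndTransformDataset, whose full-pass analysis executes as an Apache Beam pipeline. The feature names ("num_reviews", "category") and the toy records are hypothetical; the tft analyzers shown (scale_to_z_score, compute_and_apply_vocabulary) are real tf.Transform APIs that compute statistics over the entire dataset.

```python
import tempfile

import tensorflow as tf
import tensorflow_transform as tft
import tensorflow_transform.beam as tft_beam  # Beam-backed implementation
from tensorflow_transform.tf_metadata import dataset_metadata, schema_utils

# Hypothetical raw records standing in for a large dataset.
raw_data = [
    {'num_reviews': 12, 'category': 'books'},
    {'num_reviews': 3, 'category': 'games'},
    {'num_reviews': 7, 'category': 'books'},
]
raw_metadata = dataset_metadata.DatasetMetadata(
    schema_utils.schema_from_feature_spec({
        'num_reviews': tf.io.FixedLenFeature([], tf.int64),
        'category': tf.io.FixedLenFeature([], tf.string),
    }))


def preprocessing_fn(inputs):
    """Turn raw features into model-ready features using full-pass analyzers."""
    return {
        # Standardize using the mean/stddev computed over the whole dataset.
        'num_reviews_scaled': tft.scale_to_z_score(
            tf.cast(inputs['num_reviews'], tf.float32)),
        # Build a vocabulary over every value seen, then map strings to ints.
        'category_id': tft.compute_and_apply_vocabulary(inputs['category']),
    }


# AnalyzeAndTransformDataset is a Beam PTransform: the analysis passes
# (mean, stddev, vocabulary) run as an Apache Beam pipeline over the data.
with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
    (transformed_data, transformed_metadata), transform_fn = (
        (raw_data, raw_metadata)
        | tft_beam.AnalyzeAndTransformDataset(preprocessing_fn))

print(transformed_data)
```

The transform_fn produced by the analysis step is itself a serializable TensorFlow graph, which is what allows the same feature preparation to be attached to the served model so that training-time and serving-time features stay consistent.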