This document summarizes Wes McKinney's talk on improving Python and Spark performance and interoperability. The talk explains why Spark DataFrames can appear faster than they really are: Python serves only as a scripting front-end, so DataFrame operations execute in the JVM, and the real cost surfaces once data has to move into the Python interpreter. It then examines the inefficient data movement and interpreter overhead incurred by Python UDFs. Apache Arrow is presented as a way to address this, since its columnar in-memory format enables zero-copy data transfers and fast messaging between processes. The talk describes work that uses Arrow to speed up DataFrame.toPandas, and it concludes with areas for future work, such as performing the Spark SQL to Arrow conversion locally on the task executors.
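To make the toPandas point concrete, here is a minimal sketch of how the Arrow-backed conversion is typically toggled in PySpark. It assumes a local Spark installation with pyarrow available; the configuration key shown is the one used in Spark 2.3-era releases, while newer releases spell it `spark.sql.execution.arrow.pyspark.enabled`, so it may need adjusting for a given version. It is meant to illustrate the general mechanism discussed in the talk, not the exact code presented there.

```python
from pyspark.sql import SparkSession

# Start a local session; pyarrow must be installed in the Python environment.
spark = (
    SparkSession.builder
    .appName("arrow-topandas-sketch")
    .getOrCreate()
)

# Build a modest DataFrame on the JVM side.
df = spark.range(1_000_000).withColumnRenamed("id", "value")

# Default path: rows are serialized one by one and reassembled in Python,
# which incurs the data-movement and interpreter overhead described above.
spark.conf.set("spark.sql.execution.arrow.enabled", "false")
pdf_slow = df.toPandas()

# Arrow path: data is transferred as columnar record batches and converted
# to pandas with far less per-row overhead. (Config key is version-dependent;
# see the note in the lead-in.)
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
pdf_fast = df.toPandas()

spark.stop()
```

The difference between the two calls is only in how the result is serialized and handed to Python; the query plan and the resulting pandas DataFrame are the same in both cases.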