The document discusses mapping algorithm graphs onto parallel processor architectures. Because each processor accesses its local memory faster than nonlocal memory, algorithms should operate on local data as much as possible, and the distribution of data structures determines which processor performs each operation. Both algorithm structure and processor organization can be represented as graphs, and good performance requires mapping one onto the other well. The document gives definitions and theorems for embedding algorithm graphs into processor topologies such as rings, meshes, and hypercubes while minimizing the dilation, the maximum distance in the host graph between the images of adjacent algorithm-graph vertices. It specifically discusses embedding complete binary trees, binomial trees, and rings. Gray codes are introduced as a way to embed rings into hypercubes with dilation 1.
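
As a minimal sketch of the Gray-code idea (not taken from the source; the function names and the choice of dimension are illustrative), the snippet below builds the binary-reflected Gray code for a d-dimensional hypercube and checks that consecutive ring positions map to node labels differing in exactly one bit, so every ring edge falls on a single hypercube edge and the embedding has dilation 1.

```python
def gray_code(d):
    """Return the 2**d binary-reflected Gray code labels for a d-dimensional hypercube."""
    return [i ^ (i >> 1) for i in range(2 ** d)]


def hamming_distance(a, b):
    """Number of differing bits, i.e. the hypercube distance between labels a and b."""
    return bin(a ^ b).count("1")


if __name__ == "__main__":
    d = 4  # illustrative hypercube dimension
    labels = gray_code(d)
    n = len(labels)
    # Dilation of the ring embedding: the largest hypercube distance between
    # the images of any two adjacent ring vertices, including the wrap-around edge.
    dilation = max(hamming_distance(labels[i], labels[(i + 1) % n]) for i in range(n))
    print(f"ring of {n} nodes -> {d}-cube, dilation = {dilation}")  # prints dilation = 1
```

The wrap-around edge also has Hamming distance 1 because the last Gray codeword differs from the first in only the highest bit, which is why the ring closes cleanly inside the hypercube.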