The document discusses large-scale distributed deep networks and how distributed computing can be applied to training them. It describes Google's DistBelief framework, which combines two approaches: model parallelism and model replication (data parallelism). Model parallelism partitions a single neural network across multiple machines so that networks too large for one machine can be trained, and trained faster. Model replication creates multiple copies of the network and trains them asynchronously on different machines and data shards, coordinating their updates through a shared parameter server using techniques such as Downpour SGD. Distribution of this kind is needed to train very large neural networks with millions or even billions of parameters, and it can provide significant speedups over single-machine training. However, it introduces challenges: network communication overhead, and the fact that the benefit of splitting a model across machines depends on how densely connected the units at the partition boundaries are.
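
To make the Downpour SGD idea concrete, the following is a minimal Python sketch of asynchronous data parallelism with a central parameter server. It is not the paper's actual implementation: the names ParameterServer and run_replica, the toy linear-regression objective, and the use of threads to stand in for separate machines are assumptions made here for illustration. Each replica works on its own data shard, periodically fetches the (possibly stale) global parameters, and pushes accumulated gradients back without waiting for the other replicas.

    import threading
    import numpy as np

    class ParameterServer:
        """Central store for the global model parameters (illustrative sketch)."""
        def __init__(self, dim, lr=0.05):
            self.params = np.zeros(dim)
            self.lr = lr
            self.lock = threading.Lock()

        def fetch(self):
            with self.lock:
                return self.params.copy()

        def push_gradient(self, grad):
            # Apply the gradient as soon as it arrives; replicas are never synchronized.
            with self.lock:
                self.params -= self.lr * grad

    def run_replica(server, data_shard, n_fetch=1, n_push=1):
        """One model replica training on its own data shard (Downpour-style loop)."""
        local_params = server.fetch()
        accumulated = np.zeros_like(local_params)
        for step, (x, y) in enumerate(data_shard, start=1):
            if step % n_fetch == 0:
                local_params = server.fetch()          # refresh possibly stale parameters
            grad = 2.0 * x * (x @ local_params - y)    # gradient of squared error for y = w.x
            accumulated += grad
            if step % n_push == 0:
                server.push_gradient(accumulated)      # asynchronous push, no barrier
                accumulated[:] = 0

    # Toy usage: four replicas jointly fit y = w.x on disjoint shards of synthetic data.
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 3.0])
    data = [(x, x @ true_w) for x in rng.normal(size=(400, 3))]
    shards = [data[i::4] for i in range(4)]

    server = ParameterServer(dim=3)
    replicas = [threading.Thread(target=run_replica, args=(server, s)) for s in shards]
    for t in replicas:
        t.start()
    for t in replicas:
        t.join()
    print("learned parameters:", server.params)

Because gradients are applied without coordination, a replica may compute its update on slightly out-of-date parameters; tolerating this staleness is what removes the synchronization bottleneck of lock-step SGD and lets each machine proceed at its own pace.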