This document provides an introduction to deep neural networks (DNNs) by Dr. Liwei Ren. It defines DNNs from both technical and mathematical perspectives. A DNN is composed of three main elements: architecture, activity rule, and learning rule. The architecture determines the network's capability and is typically a directed graph with weights, biases, and activation functions. Gradient descent with backpropagation is commonly used as the learning rule, updating the weights to minimize error. Universal approximation theorems show that both shallow and deep neural networks can approximate broad classes of functions, with deep networks potentially doing so more efficiently. Applications such as image recognition are given as examples, and security issues are also briefly mentioned.
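The learning rule mentioned above can be illustrated with a minimal sketch, not taken from the document itself: gradient descent applied to a single linear neuron `w*x + b`, fitting data generated from `y = 2x + 1` by repeatedly stepping the weight and bias against the gradient of the mean squared error. The function name `train` and all parameter values are illustrative assumptions.

```python
# Hypothetical sketch of gradient descent as a learning rule:
# fit a single linear neuron w*x + b to data from y = 2x + 1.
def train(xs, ys, lr=0.05, epochs=500):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw  # step the weight against the gradient
        b -= lr * db  # step the bias against the gradient
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 1 for x in xs]
w, b = train(xs, ys)  # w approaches 2, b approaches 1
```

In a real DNN the same idea is applied layer by layer, with backpropagation computing the gradients through the composed activation functions.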