The document summarizes Tomas Mikolov's talk on recurrent neural networks and directions for future research. The key points are:

1) Recurrent networks have seen renewed success since 2010, largely due to simple tricks such as gradient clipping that allow them to be trained more stably (see the first sketch below). Structurally constrained recurrent networks (SCRNs) provide longer short-term memory than simple RNNs without requiring complex architectures (second sketch below).

2) While RNNs have achieved strong performance on many tasks, they struggle with algorithmic patterns that require memorizing sequences or counting. Stack-augmented RNNs add structured memory to address such limitations (third sketch below).

3) To build truly intelligent machines, we need to focus on developing skills such as communication and the ability to learn new tasks quickly from few examples.
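
To make point 1 concrete, here is a minimal PyTorch training step with gradient clipping. The model shape, dummy data, and the max_norm=1.0 threshold are illustrative assumptions, not details from the talk; the point is only that rescaling the gradient norm before the optimizer step keeps occasional exploding gradients from destabilizing training.

```python
import torch
import torch.nn as nn

# Illustrative model and data; sizes are arbitrary assumptions.
rnn = nn.RNN(input_size=10, hidden_size=32, batch_first=True)
head = nn.Linear(32, 10)
params = list(rnn.parameters()) + list(head.parameters())
opt = torch.optim.SGD(params, lr=0.1)

x = torch.randn(4, 20, 10)       # (batch, time, features), dummy input
target = torch.randn(4, 20, 10)  # dummy regression target

out, _ = rnn(x)
loss = nn.functional.mse_loss(head(out), target)

opt.zero_grad()
loss.backward()
# Rescale the global gradient norm to at most 1.0 so a single
# exploding-gradient step cannot blow up the parameters.
torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)
opt.step()
```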
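
The SCRN mentioned in point 1 can be sketched as follows, assuming the formulation from Mikolov et al., "Learning Longer Memory in Recurrent Neural Networks": a slow context state s is a fixed-rate exponential average of the input, while the fast hidden state h evolves as in a simple RNN but also reads s. The layer sizes and decay rate alpha below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SCRNCell(nn.Module):
    """Sketch of a structurally constrained recurrent network cell."""
    def __init__(self, input_size, hidden_size, context_size, alpha=0.95):
        super().__init__()
        self.alpha = alpha  # fixed decay; the paper also allows learning it
        self.B = nn.Linear(input_size, context_size, bias=False)
        self.A = nn.Linear(input_size, hidden_size, bias=False)
        self.P = nn.Linear(context_size, hidden_size, bias=False)
        self.R = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x, h, s):
        # Slow context: a constrained (diagonal, near-identity) recurrence
        # that decays gently, so information persists over many time steps.
        s = (1 - self.alpha) * self.B(x) + self.alpha * s
        # Fast hidden state: a simple RNN update that also reads the context.
        h = torch.sigmoid(self.A(x) + self.R(h) + self.P(s))
        return h, s
```

Because the context recurrence is constrained rather than a full recurrent matrix, s changes slowly and retains longer short-term memory without gates or other architectural complexity.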
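
For point 2, here is a rough sketch of a continuous stack in the spirit of Joulin & Mikolov's stack-augmented RNN: the controller emits a soft distribution over push/pop/no-op actions, and the stack contents are updated as a convex combination of the three discrete outcomes, keeping everything differentiable. The exact wiring, sizes, and single-stack setup are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StackRNNCell(nn.Module):
    """Sketch of a stack-augmented RNN cell with one continuous stack."""
    def __init__(self, input_size, hidden_size, stack_depth=8):
        super().__init__()
        self.ih = nn.Linear(input_size, hidden_size)
        self.hh = nn.Linear(hidden_size, hidden_size)
        self.sh = nn.Linear(1, hidden_size, bias=False)  # read the stack top
        self.action = nn.Linear(hidden_size, 3)          # push / pop / no-op
        self.value = nn.Linear(hidden_size, 1)           # value pushed on top

    def forward(self, x, h, stack):
        # stack: (batch, depth); stack[:, 0] is the top element.
        h = torch.sigmoid(self.ih(x) + self.hh(h) + self.sh(stack[:, :1]))
        a = F.softmax(self.action(h), dim=-1)            # soft stack action
        push, pop, noop = a[:, 0:1], a[:, 1:2], a[:, 2:3]
        d = torch.sigmoid(self.value(h))                 # candidate new top
        popped = torch.cat([stack[:, 1:], torch.zeros_like(stack[:, :1])], 1)
        pushed = torch.cat([d, stack[:, :-1]], 1)        # contents shift down
        # Each slot becomes a convex combination of the three outcomes.
        stack = push * pushed + pop * popped + noop * stack
        return h, stack

# Usage: step the cell over a dummy sequence.
cell = StackRNNCell(input_size=4, hidden_size=16)
h, stack = torch.zeros(2, 16), torch.zeros(2, 8)
for x in torch.randn(10, 2, 4):
    h, stack = cell(x, h, stack)
```

This kind of structured memory is what lets the model learn patterns such as counting or sequence memorization that a fixed-size hidden state handles poorly.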