Video Book on Deep Learning
I am happy to present a video book on deep learning. Thanks to everyone who sent email messages and suggestions; they helped me select appropriate topics. I have tried to do the contents justice to the best of my knowledge and understanding, and I prepared the material in my free time and on weekends. I hope you find it useful. Since scientific development is an endless process, I will keep updating it. Clicking on a link will take you to the YouTube page for the related content.
Contents.
1. Deep Learning Basics.
- Gradient Descent & Batch Gradient Descent
- Stochastic Gradient Descent & Mini Batch Gradient Descent
- Momentum and RMSProp
- ADAM (Adaptive Moment Estimation) (see the update-rule sketch below)
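To make Section 1 concrete, here is a minimal NumPy sketch of the Adam update rule on a toy one-dimensional quadratic. The loss function, learning rate, and step count are illustrative assumptions, not values taken from the videos.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moving averages of the gradient and its square,
    with bias correction for the zero-initialized moments."""
    m = beta1 * m + (1 - beta1) * grad        # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy problem (assumed for illustration): minimize f(w) = (w - 3)^2, grad = 2(w - 3).
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 1001):                      # Adam's time step t starts at 1
    grad = 2.0 * (w - 3.0)
    w, m, v = adam_step(w, grad, m, v, t)
print(w)  # approaches the minimizer 3.0
```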
2. Loss Functions in Deep Learning
- Loss Functions (Deep Learning) – Part 1.1 (Contains: 1. Entropy, 2. Cross-Entropy)
- Loss Functions (Deep Learning) – Part 1.2 (Contains: 1. Categorical Cross-Entropy Loss, 2. Softmax Activation, 3. Derivation)
- Loss Functions (Deep Learning) – Part 1.3 (Contains: Binary Cross-Entropy Loss, also known as Sigmoid Cross-Entropy Loss; a worked example follows this section)
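As a companion to the loss-function videos, this sketch computes softmax probabilities and the categorical cross-entropy for a single example. The logits and target are made-up values chosen purely for illustration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: shift by the max before exponentiating."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def categorical_cross_entropy(p, y):
    """CE = -sum_i y_i * log(p_i); for a one-hot y this is -log(p_true)."""
    return -np.sum(y * np.log(p + 1e-12))     # epsilon guards against log(0)

logits = np.array([2.0, 1.0, 0.1])  # raw network outputs (assumed values)
y = np.array([1.0, 0.0, 0.0])       # one-hot target: class 0
p = softmax(logits)
print(p, categorical_cross_entropy(p, y))
# Well-known identity: the gradient of softmax + CE w.r.t. the logits is p - y.
print(p - y)
```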
3. Deep Neural Networks.
- Deep Learning using Deep Neural Networks Part-1 (Contains: 1. Overview of Deep Learning, 2. Deep Neural Networks, 3. Network Structure Used in Deep Neural Networks, 4. Functioning of Deep Neural Networks)
- Deep Learning using Deep Neural Networks Part-2 (Contains: 1. Forward Pass, 2. Total Error Calculation)
- Deep Learning using Deep Neural Networks Part-3 (Contains: 1. Back-propagation, 2. Test Phase of Supervised Learning with a Deep Neural Network, 3. Performance-Related Discussion; an end-to-end sketch follows this section)
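The three DNN videos cover the forward pass, total-error calculation, and back-propagation; the sketch below runs all three phases on a tiny 2-2-1 network. The data, architecture, and learning rate are illustrative assumptions, not taken from the videos.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumed): the target is the mean of the two inputs.
X = rng.normal(size=(4, 2))          # 4 samples, 2 features
y = (X[:, :1] + X[:, 1:]) * 0.5

W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    # Forward pass (Part-2): input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = h @ W2 + b2
    # Total error (Part-2): mean squared error over the batch.
    err = out - y
    loss = 0.5 * np.mean(err ** 2)
    # Back-propagation (Part-3): apply the chain rule layer by layer.
    d_out = err / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * h * (1 - h)   # sigmoid'(z) = h * (1 - h)
    dW1, db1 = X.T @ d_h, d_h.sum(0)
    # Gradient-descent weight update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
print(loss)  # should be small after training
```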
4. CNN (Convolutional Neural Networks).
- Convolutional Neural Network Made Easy Part-1 (Contains: 1. Basics of CNN, 2. Convolution Operation, 3. Pooling Operation, 4. Calculating I/O Size, 5. Some Insight about Convolution and Pooling)
- Convolutional Neural Network Made Easy Part-2 (Contains: 1. Basics of Padding in Convolutional Neural Networks, 2. Padding I/O-Size Computation, 3. Some Insight about Padding)
- Convolutional Neural Network Made Easy Part-3 (Contains: 1. Basics of Stride in Convolutional Neural Networks, 2. Convolution and Pooling with Higher Stride Values, 3. Some Insight about Strides; the I/O-size formula is sketched after this section)
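The I/O-size calculation from Parts 1 to 3 follows one standard formula along each spatial dimension. Here is a small sketch of it; the input, filter, padding, and stride values are chosen purely for illustration.

```python
def conv_output_size(n, f, p=0, s=1):
    """Output size along one dimension: floor((n + 2p - f) / s) + 1,
    for input size n, filter size f, padding p, and stride s."""
    return (n + 2 * p - f) // s + 1

# Illustrative numbers: a 32x32 input with a 5x5 filter.
print(conv_output_size(32, 5))            # 28  (no padding, stride 1)
print(conv_output_size(32, 5, p=2))       # 32  ('same' padding at stride 1)
print(conv_output_size(32, 5, p=2, s=2))  # 16  (stride 2 roughly halves the size)
print(conv_output_size(28, 2, s=2))       # 14  (a typical 2x2 max-pool)
```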
5. RNN (Recurrent Neural Networks).
- Deep Learning using Recurrent Neural Network Part-1 (Contains: 1. Basics of Recurrent Neural Networks (RNN), 2. Architectural Differences between RNN and DNN (Deep Neural Network))
- Deep Learning using Recurrent Neural Network Part-2 (Contains: training the RNN with Back-propagation Through Time (BPTT); the forward recurrence is sketched after this section)
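Below is a minimal sketch of the forward recurrence that BPTT differentiates through. The dimensions and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# One recurrent layer unrolled over time (all sizes assumed for illustration).
input_dim, hidden_dim, steps = 3, 4, 5
Wx = rng.normal(size=(input_dim, hidden_dim))   # input-to-hidden weights
Wh = rng.normal(size=(hidden_dim, hidden_dim))  # hidden-to-hidden (recurrent) weights
b = np.zeros(hidden_dim)

xs = rng.normal(size=(steps, input_dim))        # a toy input sequence
h = np.zeros(hidden_dim)                        # initial hidden state
for x in xs:
    # The defining RNN recurrence: the same weights are reused at every step,
    # and h carries information forward in time. BPTT backpropagates through
    # this loop, summing the gradients of Wx, Wh, and b across all time steps.
    h = np.tanh(x @ Wx + h @ Wh + b)
print(h)
```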
6. Long Short-Term Memory (LSTM)
- Long Short-Term Memory (LSTM) Part-1
- Long Short-Term Memory (LSTM) Part-2
- Long Short-Term Memory (LSTM) Part-3 (a single-cell sketch follows this section)
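To accompany the LSTM videos, here is a sketch of one LSTM cell step using the standard gate equations. The weight layout (a single concatenated matrix for all four gates) and all sizes are assumptions made for brevity.

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    """One LSTM step. W maps [x, h] to the four gate pre-activations."""
    z = np.concatenate([x, h]) @ W + b
    i, f, o, g = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))  # gates in (0,1)
    g = np.tanh(g)                 # candidate cell values
    c = f * c + i * g              # forget old memory, write new memory
    h = o * np.tanh(c)             # expose a gated view of the cell state
    return h, c

rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4       # illustrative sizes
W = rng.normal(size=(input_dim + hidden_dim, 4 * hidden_dim)) * 0.1
b = np.zeros(4 * hidden_dim)
h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
for x in rng.normal(size=(5, input_dim)):   # a toy 5-step sequence
    h, c = lstm_step(x, h, c, W, b)
print(h)
```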
7. Deep Learning and Language Model
- Deep Learning and Language Model - Part-1
- Deep Learning and Language Model - Part-2
- Deep Learning and Language Model - Part-3
8. Word2Vec
- Word2Vec: Continuous Bag-of-Words Architecture Part-1
- Word2Vec: Continuous Bag-of-Words Architecture Part-2
- Word2Vec: Skip-Gram Part-1
- Word2Vec: Skip-Gram Part-2 (a skip-gram scoring sketch follows this section)
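This sketch shows how a skip-gram model scores context words given a center word. The vocabulary size, embedding dimension, and the names W_in and W_out are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and embedding sizes (assumed for illustration).
vocab_size, embed_dim = 10, 8
W_in = rng.normal(size=(vocab_size, embed_dim)) * 0.1    # center-word vectors
W_out = rng.normal(size=(vocab_size, embed_dim)) * 0.1   # context-word vectors

def skipgram_probs(center_id):
    """P(context word | center word) via softmax over the whole vocabulary."""
    v = W_in[center_id]                  # look up the center word's vector
    scores = W_out @ v                   # dot product with every output vector
    e = np.exp(scores - scores.max())
    return e / e.sum()

p = skipgram_probs(center_id=3)
print(p.sum(), p.argmax())  # probabilities sum to 1
# Training raises p[context_id] for observed (center, context) pairs;
# CBOW reverses the roles, predicting the center word from averaged context vectors.
```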
9. Attention-Based Mechanism.
10. Transformer Model for NLP
- Transformer Model for NLP Part-1 (Part 1 of the tutorial; discusses the Transformer model introduced in: Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. "Attention Is All You Need." In Advances in Neural Information Processing Systems, pp. 5998-6008. 2017.)
- Transformer Model for NLP Part-2 (Part 2 of the tutorial; continues the discussion of the same paper. The core attention operation is sketched after this section.)
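The heart of the cited paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Here is a minimal single-head NumPy sketch of that formula; the sequence length and dimensions are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                   # weighted sum of the value vectors

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8                      # illustrative sizes
Q = rng.normal(size=(seq_len, d_k))
K = rng.normal(size=(seq_len, d_k))
V = rng.normal(size=(seq_len, d_k))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```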
11. BERT Model for NLP
- BERT Part-1 (Bidirectional Encoder Representations from Transformers) (Contains: 1. Important Points Related to BERT, 2. BERT Embedding Layer Architecture)
- BERT Part-2 (Contains: 1. Bi-directional Transformers inside BERT, 2. Bidirectional Self-Attention, 3. Multi-Headed Attention)
- BERT Part-3 (Contains: 1. Role of Layer Normalization in BERT, 2. Role of Residual Connections in BERT, 3. Overall Functioning)
12. XLNet Made Easy
- XLNet Made Easy Part-1 (Contains: 1. BERT vs. XLNet, 2. Overview of XLNet, 3. Autoregressive Language Modeling)
- XLNet Made Easy Part-2 (Contains: 1. Permutation Language Modeling for XLNet, 2. Merits and Demerits of Permutation Language Modeling)
- XLNet Made Easy Part-3 (Contains: 1. Masked Attention for XLNet, 2. Two-Stream Self-Attention for XLNet, 3. Final Working Overview of XLNet)
13. Restricted Boltzmann Machine.
- Restricted Boltzmann Machine Part-1
- Restricted Boltzmann Machine Part-2
- Restricted Boltzmann Machine Part-3
14. Deep Learning using Deep Belief Network.
15. Logistic Regression (Basics, Cost Function, Learning Weight Vectors, Example; a minimal sketch follows below)
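A minimal sketch of logistic regression with the binary cross-entropy cost and gradient-descent weight updates. The toy data, learning rate, and iteration count are made-up values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (assumed): label is 1 when x1 + x2 > 0.
X = rng.normal(size=(100, 2))
y = (X.sum(axis=1) > 0).astype(float)

w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid hypothesis
    # Cost: mean binary cross-entropy, J = -mean(y log p + (1-y) log(1-p)).
    # Its gradient takes the simple form mean((p - y) * x).
    grad_w = X.T @ (p - y) / len(X)
    grad_b = np.mean(p - y)
    w -= lr * grad_w                          # gradient-descent weight update
    b -= lr * grad_b
print(w, b)  # weights roughly align with the true separating direction
```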
16. Transfer Learning
- Transfer Learning Part-1 (Overview of Multi-Task Learning)
- Transfer Learning Part-2 (Multi-Task Learning)