Learn how Deep Learning works and how it is applied to solve data science problems.
This course has been created, designed, and assembled by professional data scientists who have worked in this field for nearly a decade. We will help you understand complex Deep Learning algorithms from the ground up while keeping you grounded in their implementation on real data science problems.
We are confident you will have fun learning from our tried-and-tested course structure, built to keep you interested in what’s coming next.
Here is how the course is going to work:
- Session 1 – Machine Learning Basics
- This is the part where we cover the prerequisite machine learning concepts.
- Concepts like regression, logistic regression, and model validation techniques (a short sketch follows below).
- You may skip this session if you are already familiar with these concepts.
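Below is a minimal sketch of the kind of model and validation covered in Session 1, using scikit-learn with a synthetic dataset as a stand-in; the feature counts and split ratio are illustrative assumptions, not the course's actual lab material:

```python
# Logistic regression with holdout validation - a minimal sketch.
# The synthetic dataset and all hyperparameters are stand-ins.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# Holdout validation: keep 30% of the data aside for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = LogisticRegression()
model.fit(X_train, y_train)

# Classification matrix and AUC on the held-out data
print(confusion_matrix(y_test, model.predict(X_test)))
print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```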
- Session 2 – Introduction to Artificial Neural Networks
- We will build our intuition for the Artificial Neural Network algorithm starting from logistic regression.
- We will cover concepts like hidden layers, decision boundaries in ANNs, backpropagation, and model optimization.
- Train a basic ANN model using Python (see the sketch below).
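As a taste of that, here is a minimal from-scratch sketch of a one-hidden-layer ANN trained with backpropagation in NumPy. XOR is used as a stand-in problem because it has a non-linear decision boundary; the layer sizes and learning rate are illustrative assumptions:

```python
# A one-hidden-layer ANN trained with backpropagation - a minimal
# NumPy sketch. XOR, the layer sizes, and the learning rate are
# stand-ins, not the course's lab code.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: not linearly separable

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # hidden layer (4 units)
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # output layer
lr = 0.5

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backpropagation (cross-entropy loss with sigmoid outputs)
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round().ravel())  # should recover XOR: [0. 1. 1. 0.]
```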
- Session 3 – TensorFlow and Keras
- In this mostly practice-based session, we will cover the very basics of using TensorFlow and see why Keras works best for us.
- We will also build our very first Deep Learning model using both TensorFlow and Keras (sketched below).
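For a sense of what that looks like, here is a minimal Keras sketch of a first Deep Learning model on MNIST; the layer sizes, optimizer, and epoch count are illustrative assumptions, not the lab's exact settings:

```python
# A first Deep Learning model in Keras on MNIST - a minimal sketch.
# Layer sizes, optimizer, and epochs are illustrative assumptions.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```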
- Session 4 – ANN Hyperparameters
- This session focuses on fine-tuning an ANN model and on regularization to combat over-fitting.
- We will cover learning rate, momentum, dropout, and more (see the sketch below).
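The sketch below shows where these hyperparameters plug into a Keras model: an explicit learning rate and momentum on SGD, a dropout layer, and early stopping as a regularizer. All the specific values are illustrative assumptions:

```python
# Hyperparameters in a Keras model - a minimal sketch. All specific
# values (dropout rate, learning rate, momentum, patience) are
# illustrative assumptions.
import tensorflow as tf

(x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
x_train, x_val = x_train / 255.0, x_val / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),  # randomly drop 50% of units each step
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Early stopping: halt when validation loss stops improving
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)
model.fit(x_train, y_train, epochs=50, batch_size=128,
          validation_data=(x_val, y_val), callbacks=[early_stop])
```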
- Session 5 – Convolutional Neural Networks (CNN)
- A modified, more advanced version of the ANN designed to handle image data.
- We will go through how a CNN model learns features from image data and classifies images with impressive accuracy, building our own CNN model in Keras along the way.
- We will cover concepts like convolution layers, pooling layers, and the intuition behind CNN architectures (a Keras sketch follows below).
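Here is a minimal Keras sketch of that architecture: convolution layers that learn filters, pooling layers that downsample, and a dense classifier on top. The filter counts and MNIST-shaped input are illustrative assumptions:

```python
# A small CNN in Keras - a minimal sketch. Filter counts and the
# MNIST-shaped input are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),  # convolution layer
    tf.keras.layers.MaxPooling2D((2, 2)),                   # pooling layer
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()  # note how few weights the conv layers use vs. a dense ANN
```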
- Session 6 – Recurrent Neural Networks (RNN)
- Another modified version of the ANN, built to handle text and sequential data and perform basic NLP tasks.
- We will understand the inner workings of RNN models and build a character-prediction model that predicts the next character given the previous ones (sketched below).
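A minimal sketch of that character predictor in Keras, using a tiny repeated corpus as a stand-in for the session's dataset; the window size and layer sizes are illustrative assumptions:

```python
# A character-level RNN predictor in Keras - a minimal sketch. The
# toy corpus, window size, and layer sizes are stand-ins.
import numpy as np
import tensorflow as tf

text = "deep learning with recurrent neural networks " * 50
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
window = 10

# Build (window of previous characters) -> (next character) pairs
X = np.array([[idx[c] for c in text[i:i + window]]
              for i in range(len(text) - window)])
y = np.array([idx[text[i + window]] for i in range(len(text) - window)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 16),
    tf.keras.layers.SimpleRNN(64),  # the recurrent layer
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=5, batch_size=64)
```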
- Session 7 – Long Short-Term Memory (LSTM)
- A more advanced version of the RNN that overcomes the shortcomings of the vanilla RNN to build better text-based models.
- We will also train an LSTM model that outperforms the RNN model at character prediction (see the sketch below).
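Relative to the RNN sketch above, the code change is a single layer swap; the gates inside the LSTM layer are what let gradients survive longer sequences. A minimal sketch, with layer sizes again assumed:

```python
# Swapping the SimpleRNN layer for an LSTM - a minimal sketch.
# Layer sizes are illustrative assumptions.
import tensorflow as tf

def build_char_model(vocab_size):
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, 16),
        tf.keras.layers.LSTM(64),  # gated recurrence replaces SimpleRNN
        tf.keras.layers.Dense(vocab_size, activation="softmax"),
    ])

model = build_char_model(vocab_size=30)  # e.g. ~30 distinct characters
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```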
Features:
- Fully packed with LAB sessions: one to learn from and one for you to do yourself.
- The course includes Python source code, datasets, and other supporting material at the beginning of each section for you to download and use on your own.
- A quiz after each section to test your learning.
Bonus:
- We are always updating our content and adding more conceptual detail to the course.
- New projects and case studies will be added to the course over time, free to access for both existing and new students.
Prerequisite:
- This course assumes that students are familiar with basic Python programming for data science.
- You may take our course Introduction to Python Programming as a prerequisite to brush up your Python skills.
Course Curriculum
Section 1 - Machine Learning Basics
- Datasets – Machine Learning Basics
- 1. Regression
- 2. Regression LAB
- 3. Logistic Regression
- 4. Logit Function
- 5. Building a Logistic Regression Line
- 6. Multiple Logistic Regression
- 7. Validation Metrics – Classification Matrix
- 8. Sensitivity and Specificity
- 9. Sensitivity vs Specificity
- 10. Sensitivity Specificity LAB
- 11. ROC and AUC
- 12. ROC and AUC LAB
- 13. The Training Error
- 14. Overfitting and Underfitting
- 15. Bias-Variance Tradeoff
- 16. Holdout Data Validation
- 17. Holdout Data Validation LAB
Section 2 - Introduction to ANN
- Datasets – Introduction to ANN
- 1. Introduction to ANN
- 2. Logistic Regression Recap LAB
- 3. Decision Boundary – Logistic Regression
- 4. Decision Boundary – LAB
- 5. New Representation for Logistic Regression
- 6. Non-Linear Decision Boundary – Problem
- 7. Non-Linear Decision Boundary – Solution
- 8. Intermediate Output LAB
- 9. Neural Network Intuition
- 10. Neural Network Algorithm
- 11. Demo Neural Network Algorithm
- 12. Neural Network LAB
- 13. Local Minima and Number of Hidden Layers
- 14. Digit Recogniser LAB
- 15. Conclusion
Section 3 - TensorFlow and Keras
- 3.1 Introduction to Deep Learning Frameworks
- 3.2 Key Terms of TensorFlow
- 3.3 Coding Basics in TensorFlow
- 3.4 Model Building Intuition
- 3.5 LAB Building Linear and Logistic Regression Models with TensorFlow
- 3.6 LAB MNIST Model Using TensorFlow
- 3.7 TensorFlow Shortcomings and Intro to Keras
- 3.8 LAB MNIST Model Using Keras
- 3.9 TensorFlow vs Keras and Conclusion
Section 4 - ANN Hyperparameters
- Datasets – ANN Hyperparameters
- 4.1 Introduction to Hyperparameters
- 4.2 LAB Calculating Number of Parameters
- 4.3 Regularization
- 4.4 Over-fitting of a Regression Model LAB
- 4.5 Regularization in Regression LAB
- 4.7 Demo Regularization in Neural Networks
- 4.8 Dropout Regularization
- 4.9 LAB Dropout Regularization
- 4.10 Weight Sharing in Dropout
- 4.11 Early Stopping
- 4.12 LAB Early Stopping Notebook
- 4.13 Activation Functions
- 4.14 Demo Activation Functions
- 4.15 The Problem of Vanishing Gradients
- 4.16 ReLU Activation Function
- 4.17 Activation Function for the Last Layer
- 4.18 Learning Rate
- 4.19 Demo Learning Rate
- 4.20 Momentum
- 4.21 LAB Learning Rate and Momentum
- 4.22 Gradient Descent Batches
- 4.23 LAB Gradient Descent vs Mini-Batch
- 4.24 Hyperparameter Conclusion
Section 5 - Convolutional Neural Networks (CNN)
- 5.1 Introduction to CNN – Fundamentals of Image Data
- 5.2 LAB ANN on Image Data – MNIST
- 5.3 Counting Parameters of an ANN for Image Data
- 5.4 LAB Parameter Count in ANN on Large Images
- 5.5 Issues with ANN on Image Data
- 5.6 Preserving Spatial Integrity of Images in Neural Networks
- 5.7 How Filters Work
- 5.8 Kernel Matrix and Convolution Layers
- 5.9 Convolved Features
- 5.10 LAB Convolution Layer
- 5.11 Handling Image Edges in Convolution
- 5.12 Depth of Convolutions
- 5.13 Number of Weights in Convolution Layers
- 5.14 Pooling Convolution Layers
- 5.15 LAB Pooling
- 5.16 The CNN Architecture
- 5.17 LAB CNN on MNIST
- 5.18 Conclusion
Section 6 - Recurrent Neural Networks (RNN)
- Datasets – Recurrent Neural Networks (RNN)
- 6.1 Introduction to RNN
- 6.2 Sequential Models
- 6.3 Sequential ANNs
- 6.4 LAB Sequential ANNs
- 6.5 RNNs: The Programmed Sequential Models
- 6.6 Backpropagation in RNNs
- 6.7 Number of Parameters in RNN Models
- 6.8 BPTT Details
- 6.9 LAB RNN Model Building
- 6.10 Issues with RNNs
- 6.11 RNN Conclusion
Section 7 - LSTM
- Datasets – LSTM
- 7.1 Introduction to LSTM
- 7.2 LSTM: What Is a Vanishing Gradient?
- 7.3 Mathematics of Vanishing Gradients
- 7.4 LAB Vanishing Gradients
- 7.5 Other RNN Issues and the LSTM Main Idea
- 7.6 LSTM Gates
- 7.7 LSTM: Different Representations
- 7.8 LAB LSTM
- 7.9 LSTM Conclusion