The ‘Data Science: Modern Deep Learning in Python’ course gives in-depth knowledge of TensorFlow, Theano, Keras, PyTorch, CNTK, and MXNet. In this course, students will learn about stochastic and mini-batch gradient descent, two commonly used techniques that allow them to train on just a small sample of the data at each iteration.
This course is for students who want to advance their knowledge of neural networks and deep learning. Students will also discover how to speed up training by using adaptive learning rate approaches like AdaGrad, RMSprop, and Adam. The course is usually available for INR 2,699 on Udemy, but students can click on the link and get the ‘Data Science: Modern Deep Learning in Python’ course for INR 449.
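To make the idea concrete, here is a minimal NumPy sketch (illustrative only, not the course's own code) of mini-batch gradient descent fitting a one-parameter linear model on a small shuffled sample per update:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x + noise
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)

w = 0.0          # single weight to learn
lr = 0.1         # learning rate
batch_size = 32  # each update sees only a small sample

for epoch in range(50):
    idx = rng.permutation(len(X))  # reshuffle every epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch, 0], y[batch]
        grad = 2 * np.mean((w * xb - yb) * xb)  # d/dw of mean squared error
        w -= lr * grad                          # update from the mini-batch only

print(f"learned w = {w:.3f}")  # close to the true slope of 3.0
```

The point of the sketch is that each parameter update uses only 32 of the 200 examples, yet the weight still converges to the true slope.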
Who can opt for this course?
- Professionals and students who want to learn more about machine learning
- Data scientists interested in deep learning
- Data scientists who are familiar with gradient descent and backpropagation and want to enhance them with stochastic batch training, momentum, and adaptive learning rate techniques like RMSprop
- Students who are unfamiliar with backpropagation or softmax can first take the instructor’s earlier course on deep learning in Python
Course Highlights
Key Highlights | Details |
---|---|
Registration Link | Apply Now! |
Price | INR 449 |
Duration | 11.5 Hours |
Rating | 4.8/5 |
Student Enrollment | 33,668 students |
Instructor | Lazy Programmer Inc. (https://www.linkedin.com/in/lazyprogrammerinc.) |
Topics Covered | PyTorch, Keras, CNTK, MXNet, Deep Learning, Stochastic Gradient Descent |
Course Level | Advanced |
Total Student Reviews | 3,068 |
Learning Outcomes
- Use backpropagation with momentum to train neural networks
- Use backpropagation with adaptive learning rate techniques such as AdaGrad, RMSprop, and Adam to train neural networks
- Know TensorFlow’s fundamental building blocks
- Create a neural network with TensorFlow
- Create a neural network with Keras
- Create a neural network with PyTorch
- Recognize the distinctions between stochastic gradient descent, batch gradient descent, and full gradient descent
- Recognize dropout regularisation and put it into practice
- Recognize batch normalisation and use it
- Recognize Theano’s fundamental building blocks
- Use Theano to create a neural network
- Use CNTK to create a neural network
- Use MXNet to create a neural network
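For readers curious about the adaptive learning rate methods named in these outcomes, the Adam update rule can be sketched in a few lines of NumPy. This is an illustrative sketch, not the course's code; `adam_step` is a hypothetical helper name:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m) and
    its square (v), with bias correction, give an adaptive step size."""
    m = beta1 * m + (1 - beta1) * grad       # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = (w - 5)^2 starting from w = 0
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 3001):
    grad = 2 * (w - 5.0)
    w, m, v = adam_step(w, grad, m, v, t, lr=0.1)

print(f"w = {w:.3f}")  # approaches the minimizer 5.0
```

Dividing by the running root-mean-square of past gradients is what makes the step size adaptive: parameters with consistently large gradients take smaller steps, and vice versa.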
Course Content
S.No. | Module (Duration) | Topics |
---|---|---|
1. | Introduction and Outline (30 minutes) | Introduction and Outline |
| | Where to get the Code |
| | How to Succeed in this Course |
2. | Review (01 hour 00 minutes) | Review (pt 1): Neuron Predictions |
| | Review (pt 2): Neuron Learning |
| | Review (pt 3): Artificial Neural Networks |
| | Review Exercise Prompt |
| | Review Code (pt 1) |
| | Review Code (pt 2) |
| | Review Summary |
3. | Stochastic Gradient Descent and Mini-Batch Gradient Descent (43 minutes) | Stochastic Gradient Descent and Mini-Batch Gradient Descent (Theory) |
| | SGD Exercise Prompt |
| | Stochastic Gradient Descent and Mini-Batch Gradient Descent (Code pt 1) |
| | Stochastic Gradient Descent and Mini-Batch Gradient Descent (Code pt 2) |
4. | Momentum and adaptive learning rates (01 hour 08 minutes) | Using Momentum to Speed Up Training |
| | Nesterov Momentum |
| | Momentum in Code |
| | Variable and adaptive learning rates |
| | Constant learning rate vs. RMSProp in Code |
| | Adam Optimization (pt 1) |
| | Adam Optimization (pt 2) |
| | Adam in Code |
| | Suggestion Box |
5. | Choosing Hyperparameters (18 minutes) | Hyperparameter Optimization: Cross-validation, Grid Search, and Random Search |
| | Sampling Logarithmically |
| | Grid Search in Code |
| | Modifying Grid Search |
| | Random Search in Code |
6. | Weight Initialization (19 minutes) | Weight Initialization Section Introduction |
| | Vanishing and Exploding Gradients |
| | Weight Initialization |
| | Local vs. Global Minima |
| | Weight Initialization Section Summary |
7. | Theano (27 minutes) | Theano Basics: Variables, Functions, Expressions, Optimization |
| | Building a neural network in Theano |
| | Is Theano Dead? |
8. | TensorFlow (31 minutes) | TensorFlow Basics: Variables, Functions, Expressions, Optimization |
| | Building a neural network in TensorFlow |
| | What is a Session? (And more) |
9. | GPU Speedup, Homework, and Other Misc Topics (43 minutes) | Setting up a GPU Instance on Amazon Web Services |
| | Installing NVIDIA GPU-Accelerated Deep Learning Libraries on your Home Computer |
| | Can Big Data be used to Speed Up Backpropagation? |
| | How to Improve your Theano and Tensorflow Skills |
| | Theano vs. TensorFlow |
10. | Transition to the 2nd Half of the Course (05 minutes) | Transition to the 2nd Half of the Course |
11. | Project: Facial Expression Recognition (01 hour 04 minutes) | Facial Expression Recognition Project Introduction |
| | Facial Expression Recognition Problem Description |
| | The class imbalance problem |
| | Utilities walkthrough |
| | Class-Based ANN in Theano |
| | Class-Based ANN in TensorFlow |
| | Facial Expression Recognition Project Summary |
12. | Modern Regularization Techniques (25 minutes) | Modern Regularization Techniques Section Introduction |
| | Dropout Regularization |
| | Dropout Intuition |
| | Noise Injection |
| | Modern Regularization Techniques Section Summary |
13. | Batch Normalization (42 minutes) | Batch Normalization Introduction |
| | Exponentially-Smoothed Averages |
| | Batch Normalization Theory |
| | Batch Normalization Tensorflow (part 1) |
| | Batch Normalization Tensorflow (part 2) |
| | Batch Normalization Theano (part 1) |
| | Batch Normalization Theano (part 2) |
| | Noise Perspective |
| | Batch Normalization Summary |
14. | Keras (19 minutes) | Keras Discussion |
| | Keras in Code |
| | Keras Functional API |
| | How to easily convert Keras into Tensorflow 2.0 code |
15. | PyTorch (17 minutes) | PyTorch Basics |
| | PyTorch Dropout |
| | PyTorch Batch Norm |
16. | PyTorch, CNTK, and MXNet (48 seconds) | PyTorch, CNTK, and MXNet |
17. | Deep Learning Review Topics (12 minutes) | What’s the difference between “neural networks” and “deep learning”? |
| | Manually Choosing Learning Rate and Regularization Penalty |
18. | Setting Up Your Environment (FAQ by Student Request) (37 minutes) | Anaconda Environment Setup |
| | How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow |
19. | Extra Help With Python Coding for Beginners (FAQ by Student Request) (45 minutes) | How to Code by Yourself (part 1) |
| | How to Code by Yourself (part 2) |
| | Proof that using Jupyter Notebook is the same as not using it |
| | How to Uncompress a .tar.gz file |
| | Python 2 vs Python 3 |
20. | Effective Learning Strategies for Machine Learning (FAQ by Student Request) (59 minutes) | How to Succeed in this Course (Long Version) |
| | Is this for Beginners or Experts? Academic or Practical? Fast or slow-paced? |
| | Machine Learning and AI Prerequisite Roadmap (pt 1) |
| | Machine Learning and AI Prerequisite Roadmap (pt 2) |
21. | Appendix / FAQ Finale (08 minutes) | What is the Appendix? |
| | BONUS |
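As a taste of the regularization material covered in modules 12 and 15, inverted dropout can be sketched in a few lines of NumPy. This is an illustrative sketch under our own naming (`dropout`, `p_keep`), not the course's code:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(a, p_keep=0.8, train=True):
    """Inverted dropout: randomly zero activations at train time and
    rescale the survivors by 1/p_keep so the expected activation is
    unchanged; at test time the layer is the identity."""
    if not train:
        return a
    mask = (rng.random(a.shape) < p_keep) / p_keep
    return a * mask

a = np.ones((1000,))
out = dropout(a, p_keep=0.8)
print(f"mean activation = {out.mean():.3f}")  # stays near 1.0
```

The rescaling by `1/p_keep` is what makes the "inverted" variant convenient: no change is needed at inference time, which is also how modern frameworks implement dropout.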
Resources Required
- Be familiar with Matplotlib, Numpy, and Python
- Refer to the instructor’s earlier course, Deep Learning in Python, if you are unfamiliar with gradient descent, backprop, and softmax.
Featured Review
Shikhar Kothari (5/5) : the instructor does an excellent job of explaining all the concepts. it’s extremely practical as the course title suggests. a handful of simple tweaks that make deep learning work better.
Pros
- Ghaith Omar (5/5) : Another great course, in depth explanation, coding from scratch in both tensorflow and theano.
- Timo K. (5/5) : Particularly great in my opinion were the sections on momentum and adaptive learning rates as well as weight initialization.
- John Joachim (5/5) : Keep up the great work, and I’ll continue to purchase your Courses.
- Baruna Nagarajan (5/5) : lots of practical tips included and includes great tutorials on theano and tensorflow
Cons
- Anonymized U. (2.5/5): This doesn’t teach much about Theano or TensorFlow except just showing the example code.
- Christopher L. (2/5): This course offers very uneven coverage of deep learning. Most of the material is dated. As for organization it seems that most of the lessons are cut and pasted from other courses.
- Maximilian T. (2/5): The examples may run but unfortunately the course is not really a lecture but more of a presentation of what the lecturer did.
About the Author
The instructor of this course is Lazy Programmer Inc., an artificial intelligence and machine learning engineer. With a 4.6 instructor rating and 148,475 reviews on Udemy, he/she offers 33 courses and has taught 527,254 students so far.
- Although the instructor has also been recognised as a data scientist, big data engineer, and full stack software engineer, he/she currently spends the majority of his/her time as an artificial intelligence and machine learning engineer with an emphasis on deep learning
- The instructor earned his/her first master’s degree, in computer engineering with a focus on machine learning and pattern recognition, more than ten years ago
- The instructor’s second master’s degree is in statistics with a focus on financial engineering
- As a data scientist and big data engineer, the instructor has experience in online advertising and digital media, optimising click and conversion rates and building data processing pipelines
- The instructor routinely uses big data technologies like Hadoop, Pig, Hive, MapReduce, and Spark
- The instructor has developed deep learning models for text modelling, image and signal processing, user behaviour prediction, and click-through rate estimation
- In work on recommendation systems, the instructor used collaborative filtering and reinforcement learning, and validated the findings using A/B testing
- The instructor has taught data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics at universities including Columbia University, NYU, Hunter College, and The New School
- The instructor’s web programming skills have helped numerous businesses
- The instructor handles all of the server-side backend work, frontend HTML/JS/CSS work, and operations/deployment work.
Comparison Table
Parameters | Data Science: Modern Deep Learning in Python | Unsupervised Deep Learning in Python | Deep Learning Prerequisites: Linear Regression in Python |
---|---|---|---|
Offers | INR 455 | INR 455 | INR 455 |
Duration | 11.5 hours | 10 hours | 6.5 hours |
Rating | 4.8/5 | 4.8/5 | 4.6/5 |
Student Enrollments | 33,668 | 20,217 | 31,467 |
Instructors | Lazy Programmer Inc. | Lazy Programmer Team | Lazy Programmer Inc. |
Register Here | Apply Now! | Apply Now! | Apply Now! |