📘 DEEP LEARNING (DL)
Access study materials and notes for this subject.
DL Unit 1: Machine Learning Basics and Deep Feedforward Networks (PDF)
DL Unit 2: Regularization and Optimization for Deep Learning (PDF)
DL Unit 3: Convolutional Neural Networks (PDF)
DL Unit 4: Recurrent and Recursive Networks (PDF)
DL Unit 5: Practical Methodology and Applications (PDF)
Syllabus Overview
UNIT - I Machine Learning Basics and Deep Feedforward Networks
Foundations of Machine Learning
Learning Algorithms
Model Capacity
Overfitting and Underfitting
Hyperparameters and Validation Sets
Estimators, Bias and Variance Trade-off
Maximum Likelihood Estimation
Bayesian Statistics Overview
Supervised Learning Algorithms
Unsupervised Learning Algorithms
Stochastic Gradient Descent (SGD)
Building a Machine Learning Algorithm
Challenges Motivating Deep Learning
Deep Feedforward Networks
Learning the XOR Problem
Gradient-Based Learning
Activation Functions and Hidden Units
Neural Network Architecture Design
Backpropagation Algorithm
Other Differentiation Algorithms (e.g., Forward-mode, Automatic Differentiation)
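To make the Unit I topics concrete, here is a minimal NumPy sketch of the classic feedforward solution to XOR: one hidden layer of two ReLU units followed by a linear readout. The weight values are hand-picked for illustration (a trained network would learn different ones); this is a sketch of the construction, not a training procedure.

```python
import numpy as np

def relu(z):
    # ReLU hidden-unit activation: max(0, z) elementwise
    return np.maximum(0, z)

def xor_net(x):
    # Hand-chosen weights for the two-ReLU-unit XOR construction
    W = np.array([[1.0, 1.0],
                  [1.0, 1.0]])      # input-to-hidden weights
    c = np.array([0.0, -1.0])       # hidden biases
    w = np.array([1.0, -2.0])       # hidden-to-output weights
    b = 0.0                         # output bias
    h = relu(W @ x + c)             # hidden representation
    return w @ h + b                # linear readout

inputs = [np.array(p, dtype=float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
outputs = [xor_net(x) for x in inputs]   # XOR truth table: 0, 1, 1, 0
```

A purely linear model cannot fit this truth table; the hidden ReLU layer warps the input space so that a linear readout can, which is the motivating example for deep feedforward networks.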
UNIT - II Regularization and Optimization for Deep Learning
Regularization Techniques
Parameter Norm Penalties (L1, L2)
Norm Penalties as Constrained Optimization
Regularization in Under-Constrained Problems
Dataset Augmentation
Noise Robustness
Semi-Supervised Learning
Multi-Task Learning
Early Stopping
Parameter Tying and Sharing
Sparse Representations
Bagging and Ensemble Methods
Dropout
Adversarial Training
Tangent Distance, Tangent Prop, Manifold Tangent Classifier
Optimization for Deep Models
Learning vs Pure Optimization
Challenges in Neural Network Optimization
Basic Optimization Algorithms (SGD, Momentum, Nesterov)
Parameter Initialization Strategies
Algorithms with Adaptive Learning Rates (AdaGrad, RMSProp, Adam)
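Two of the Unit II topics, SGD with momentum and an L2 parameter norm penalty, can be sketched together in a few lines of NumPy. The quadratic objective, learning rate, and decay coefficient below are illustrative choices, not values from the course materials.

```python
import numpy as np

def sgd_momentum(grad, w0, lr=0.1, beta=0.9, weight_decay=0.01, steps=200):
    # Gradient descent with momentum; the L2 (weight-decay) penalty
    # contributes an extra lambda * w term to every gradient.
    w = np.array(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w) + weight_decay * w
        v = beta * v + g        # accumulate a velocity (momentum)
        w = w - lr * v
    return w

# Minimise f(w) = 0.5 * ||w - target||^2 as a toy objective.
target = np.array([3.0, -2.0])
w_star = sgd_momentum(lambda w: w - target, w0=[0.0, 0.0])
```

Note the regularization effect: the penalized minimum is `target / (1 + weight_decay)`, slightly shrunk toward zero relative to the unpenalized solution, which is exactly what an L2 norm penalty is meant to do.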
UNIT - III Convolutional Neural Networks
Convolutional Networks (CNNs)
The Convolution Operation
Motivation for CNNs (Translation Invariance, Parameter Sharing)
Pooling Layers (Max, Average)
Convolution and Pooling as Infinitely Strong Priors
Variants of Convolution (Dilated, Depthwise, Separable)
Structured Outputs with CNNs
Handling Different Data Types (Images, Volumes, Sequences)
Efficient Convolution Algorithms (FFT, Winograd)
Random or Unsupervised Feature Learning
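The core Unit III operations, convolution and pooling, fit in a short NumPy sketch. As in most deep learning frameworks, "convolution" below is actually valid cross-correlation (no kernel flip); the edge-detector kernel and the input image are made-up examples.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid cross-correlation: slide the kernel over the image,
    # taking an elementwise product-and-sum at each position.
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(x, size=2):
    # Non-overlapping max pooling; trailing rows/cols that do not
    # fill a full window are dropped.
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

edge = np.array([[1.0, -1.0]])            # horizontal edge-detector kernel
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
fmap = conv2d(img, edge)                  # responds only at the 0 -> 1 edge
pooled = max_pool(fmap)
```

The same small kernel is applied at every spatial position, which is the parameter sharing and translation structure that motivates convolutional networks in the first place.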
UNIT - IV Recurrent and Recursive Networks
Recurrent Neural Networks (RNNs)
Unfolding Computational Graphs
Basic RNN Architecture
Bidirectional RNNs
Encoder-Decoder Sequence-to-Sequence Models
Deep Recurrent Networks
Recursive Neural Networks (Tree-structured)
Challenge of Long-Term Dependencies
Echo State Networks
Leaky Units and Multi-Time Scale Strategies
Long Short-Term Memory (LSTM)
Gated Recurrent Units (GRUs)
Optimization Techniques for Long-Term Dependencies
Explicit Memory Architectures (e.g., Neural Turing Machines)
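The first two Unit IV topics, unfolding a computational graph and the basic RNN architecture, reduce to a short loop. The toy weights below are chosen by hand to illustrate the long-term dependency problem: with recurrent weights of magnitude below one, the trace of an early input shrinks at every step.

```python
import numpy as np

def rnn_forward(x_seq, h0, Wxh, Whh, b):
    # Vanilla RNN unfolded through time: h_t = tanh(Wxh x_t + Whh h_{t-1} + b)
    h = h0
    states = []
    for x in x_seq:
        h = np.tanh(Wxh @ x + Whh @ h + b)
        states.append(h)
    return states

# Toy setup: 1-D inputs, a 2-unit hidden state, recurrent weights < 1.
Wxh = np.array([[1.0], [0.5]])            # input-to-hidden weights
Whh = np.array([[0.5, 0.0],
                [0.0, 0.5]])              # hidden-to-hidden (recurrent) weights
b = np.zeros(2)
x_seq = [np.array([1.0]), np.array([0.0]), np.array([0.0])]
states = rnn_forward(x_seq, np.zeros(2), Wxh, Whh, b)
```

Only the first input is nonzero, yet its contribution to the hidden state decays geometrically over the following steps; gated architectures such as LSTMs and GRUs exist precisely to let such information persist.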
UNIT - V Practical Methodology and Applications
Practical Deep Learning Methodology
Performance Metrics (Accuracy, Precision, Recall, F1, AUC, BLEU, etc.)
Default Baseline Models
Determining Whether to Gather More Data
Hyperparameter Selection and Tuning
Debugging Strategies (Gradient Checks, Visualization, Ablation)
Case Study: Multi-Digit Number Recognition
Applications of Deep Learning
Large-Scale Deep Learning Systems
Computer Vision (Image Classification, Object Detection, Segmentation)
Speech Recognition and Synthesis
Natural Language Processing (Machine Translation, Text Generation, Sentiment Analysis)
Other Applications (Healthcare, Robotics, Recommender Systems, Game AI)
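Among the Unit V performance metrics, precision, recall, and F1 for binary classification can be computed directly from the confusion-matrix counts. The label vectors below are a made-up example.

```python
def precision_recall_f1(y_true, y_pred):
    # Confusion-matrix counts for the positive class (label 1)
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many are right
    recall = tp / (tp + fn) if tp + fn else 0.0      # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

Unlike raw accuracy, these metrics stay informative on imbalanced data, which is why they are the default baselines for evaluating classifiers in practice.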