This intermediate track provides a comprehensive, college-level introduction to deep learning. Starting with the mathematical foundations of neural networks and backpropagation, you will progress through convolutional networks for computer vision, recurrent architectures for sequential data, the transformer revolution powering modern NLP, and generative models including GANs and diffusion models. The course also covers transfer learning and ethical AI deployment, and culminates in a capstone project in which you build a complete image classification pipeline with PyTorch.
Understand perceptrons, activation functions, the forward pass, and the universal approximation theorem that underpins all of deep learning
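The forward pass covered in this module can be sketched in a few lines of NumPy. The weights below are hypothetical, chosen only to show a single sigmoid unit approximating a logical AND:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def perceptron_forward(x, w, b):
    # Forward pass: weighted sum of inputs plus bias, then an activation.
    return sigmoid(np.dot(w, x) + b)

# Hypothetical 2-input unit that approximates logical AND.
w = np.array([10.0, 10.0])
b = -15.0
print(perceptron_forward(np.array([1.0, 1.0]), w, b))  # close to 1
print(perceptron_forward(np.array([0.0, 1.0]), w, b))  # close to 0
```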
Master loss functions, backpropagation, gradient descent variants, learning rate schedules, batch normalization, and regularization techniques
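Gradient descent as taught here reduces to a simple loop: compute a loss, differentiate it, and step against the gradient. A minimal sketch on a toy linear-regression problem (all data and hyperparameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x + 1 plus a little noise (values are hypothetical).
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X + 1.0 + 0.01 * rng.normal(size=(100, 1))

w, b = 0.0, 0.0
lr = 0.5  # learning rate

for step in range(200):
    y_hat = X * w + b              # forward pass
    err = y_hat - y
    loss = np.mean(err ** 2)       # mean-squared-error loss
    grad_w = 2 * np.mean(err * X)  # dL/dw via the chain rule
    grad_b = 2 * np.mean(err)      # dL/db
    w -= lr * grad_w               # gradient descent update
    b -= lr * grad_b

print(w, b)  # should approach 3 and 1
```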
Learn the convolution operation, pooling layers, and landmark architectures from LeNet to ResNet for image classification
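The convolution operation itself is short to write down. Below is a naive NumPy sketch of valid-mode convolution (strictly, cross-correlation, which is what deep learning frameworks compute), applied to a hypothetical edge-detection example:

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2D cross-correlation: slide the kernel over the image
    # and take an elementwise product-and-sum at each position.
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical image: left half dark, right half bright.
image = np.zeros((5, 6))
image[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0]])  # fires where intensity jumps left-to-right
edges = conv2d(image, kernel)
print(edges)  # nonzero only along the vertical edge
```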
Explore vanilla RNNs, the vanishing gradient problem in sequences, LSTMs, GRUs, and sequence-to-sequence models
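A vanilla RNN is essentially a loop that folds each input into a hidden state. A minimal NumPy sketch of the forward pass (weights and sizes are arbitrary placeholders):

```python
import numpy as np

def rnn_forward(xs, Wx, Wh, b):
    # Vanilla RNN: the hidden state is updated from the previous state
    # and the current input at every time step.
    h = np.zeros(Wh.shape[0])
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h  # the final hidden state summarizes the whole sequence

rng = np.random.default_rng(1)
Wx = rng.normal(scale=0.5, size=(4, 3))  # input-to-hidden weights (hypothetical sizes)
Wh = rng.normal(scale=0.5, size=(4, 4))  # hidden-to-hidden (recurrent) weights
b = np.zeros(4)
sequence = [rng.normal(size=3) for _ in range(10)]
h_final = rnn_forward(sequence, Wx, Wh, b)
print(h_final.shape)
```

Repeated multiplication by `Wh` inside this loop is exactly what makes gradients vanish (or explode) over long sequences, motivating the LSTM and GRU gating covered in this module.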
Understand attention mechanisms, self-attention, multi-head attention, positional encoding, and the transformer architecture that powers BERT and GPT
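Scaled dot-product attention, the core operation of this module, fits in a few lines. A NumPy sketch with hypothetical shapes:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each query scores every key,
    # and the resulting weights mix the corresponding values.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(2)
Q = rng.normal(size=(5, 8))  # 5 query positions, dimension 8 (hypothetical sizes)
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
out, weights = attention(Q, K, V)
print(out.shape, weights.shape)
```

Self-attention is the special case where Q, K, and V are all projections of the same sequence; multi-head attention runs several such maps in parallel on lower-dimensional projections.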
Explore autoencoders, variational autoencoders, generative adversarial networks, and diffusion models for generating new data
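An autoencoder compresses its input through a bottleneck and reconstructs it. A minimal sketch, assuming synthetic data that lies near a one-dimensional subspace: a linear autoencoder trained with (constant-factor-scaled) gradient descent on the reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: 4-D points that really live along one direction.
direction = np.array([1.0, 2.0, -1.0, 0.5])
codes = rng.normal(size=(200, 1))
X = codes @ direction[None, :] + 0.01 * rng.normal(size=(200, 4))

# Linear autoencoder: encode 4 -> 1, decode 1 -> 4.
W_enc = rng.normal(scale=0.1, size=(4, 1))
W_dec = rng.normal(scale=0.1, size=(1, 4))
lr = 0.1

for _ in range(2000):
    Z = X @ W_enc      # bottleneck code
    X_hat = Z @ W_dec  # reconstruction
    err = X_hat - X
    loss = np.mean(err ** 2)
    # Gradients of the reconstruction error (up to a constant factor).
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_enc -= lr * g_enc
    W_dec -= lr * g_dec

print(loss)  # should fall close to the noise floor
```

Variational autoencoders, GANs, and diffusion models build on this idea but learn a *distribution* over the bottleneck so that new samples can be drawn.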
Learn to leverage pretrained models for new tasks through feature extraction, fine-tuning, domain adaptation, and few-shot learning
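Feature extraction, the simplest form of transfer learning, means freezing a pretrained backbone and training only a new head. A NumPy sketch in which a fixed random projection stands in for a frozen backbone such as ResNet (all data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for a pretrained backbone: a fixed projection plus ReLU.
# (In practice this would be a frozen network like ResNet.)
W_backbone = rng.normal(size=(10, 6))

def features(X):
    # Frozen feature extractor: no gradients ever touch W_backbone.
    return np.maximum(X @ W_backbone, 0.0)

# Hypothetical new task: labels that are linearly separable in feature space.
X = rng.normal(size=(300, 10))
F = features(X)
w_true = rng.normal(size=6)
y = (F @ w_true > 0).astype(float)

# Train only the new head (logistic regression) on top of the frozen features.
w_head = np.zeros(6)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w_head)))
    grad = F.T @ (p - y) / len(y)  # logistic-loss gradient, head only
    w_head -= lr * grad

acc = np.mean(((1.0 / (1.0 + np.exp(-(F @ w_head)))) > 0.5) == (y == 1))
print(acc)
```

Fine-tuning differs only in that some or all backbone weights are unfrozen and updated with a small learning rate.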
Explore bias in training data, model interpretability, scaling laws, RLHF, and responsible AI deployment practices
Build a complete image classification pipeline using PyTorch — from data loading through training, evaluation, and inference
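The capstone itself uses PyTorch, but the stages of the pipeline can be previewed framework-free. A NumPy sketch with the same shape: synthetic data loading, a minibatch training loop, held-out evaluation, and single-example inference, with a linear classifier standing in for the CNN (all data and hyperparameters are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# --- Data loading: hypothetical 8x8 "images"; class 1 iff the right half is brighter.
def make_batch(n):
    imgs = rng.normal(size=(n, 8, 8))
    shift = rng.choice([-1.0, 1.0], size=n)
    imgs[:, :, 4:] += shift[:, None, None]  # brighten or darken the right half
    labels = (shift > 0).astype(int)
    return imgs.reshape(n, 64), labels

X_train, y_train = make_batch(500)
X_test, y_test = make_batch(200)

# --- Model: a single logistic layer, standing in for a CNN.
w = np.zeros(64)
b = 0.0
lr = 0.1

def predict_proba(X):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# --- Training loop: minibatch SGD, analogous to iterating over a DataLoader.
for epoch in range(5):
    perm = rng.permutation(len(X_train))
    for start in range(0, len(X_train), 32):
        idx = perm[start:start + 32]
        Xb, yb = X_train[idx], y_train[idx]
        p = predict_proba(Xb)
        w -= lr * Xb.T @ (p - yb) / len(yb)  # logistic-loss gradient step
        b -= lr * np.mean(p - yb)

# --- Evaluation on the held-out split.
test_acc = np.mean((predict_proba(X_test) > 0.5) == (y_test == 1))
print(f"test accuracy: {test_acc:.2f}")

# --- Inference on a single new example.
x_new, _ = make_batch(1)
print("predicted class:", int(predict_proba(x_new)[0] > 0.5))
```

In the PyTorch capstone, each stage maps onto a library component: `Dataset`/`DataLoader` for loading, an `nn.Module` for the model, an optimizer-driven loop for training, and `model.eval()` plus `torch.no_grad()` for evaluation and inference.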
Comprehensive 20-question assessment covering all topics in the Deep Learning track