DTSA 5513 Deep Learning for Computer Vision
Same as CSCA 5322
Specialization: Computer Vision
Instructor: Dr. Tom Yeh
Prior knowledge needed: Basic to intermediate Linear Algebra, Trigonometry, Vectors & Matrices
View on Coursera
Course Syllabus
Learning Outcomes
- Improve model performance and training stability using multilayer perceptrons (MLPs) and applying normalization techniques.
- Implement autoencoders for unsupervised feature learning and design Generative Adversarial Networks (GANs) to generate synthetic images.
- Train convolutional neural networks (CNNs) for image classification tasks, understanding how layers extract spatial features from visual data.
- Apply advanced architectures like ResNet for deep image recognition and U-Net for image segmentation.
Course Content
Duration: 5h
Welcome to Deep Learning for Computer Vision, the second course in the Computer Vision specialization. In this first module, you'll be introduced to the principles behind neural networks and their use in visual recognition tasks. You'll begin by learning the basic building blocks—neurons, weights, biases—and progress toward constructing simple multi-layer perceptrons. Then, you'll discover key concepts such as activation functions, batch processing, and graph-to-matrix conversions. Finally, you will visualize neural networks with an emphasis on classification tasks.
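To make the building blocks concrete, here is a minimal pure-Python sketch of a two-layer perceptron forward pass. The weights, biases, and layer sizes below are hypothetical illustrations, not course materials:

```python
def relu(x):
    # ReLU activation: max(0, v) applied element-wise
    return [max(0.0, v) for v in x]

def dense(x, W, b):
    # One fully connected layer: y_j = sum_i x_i * W[j][i] + b[j]
    return [sum(xi * wji for xi, wji in zip(x, row)) + bj
            for row, bj in zip(W, b)]

# Hypothetical weights for a 2-input, 2-hidden, 1-output MLP
W1 = [[1.0, -1.0], [0.5, 0.5]]
b1 = [0.0, 0.0]
W2 = [[1.0, 1.0]]
b2 = [0.0]

def mlp(x):
    h = relu(dense(x, W1, b1))  # hidden layer with nonlinearity
    return dense(h, W2, b2)     # linear output layer

print(mlp([2.0, 1.0]))  # → [2.5]
```

In a real network the weights are learned by gradient descent rather than set by hand; the point here is only the neuron-weight-bias structure and how layers compose.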
Duration: 3h
In this module, you’ll explore two powerful architectures in deep learning: autoencoders and generative adversarial networks (GANs). You’ll begin by learning how autoencoders compress and reconstruct data using encoder-decoder structures, and how reconstruction loss is minimized through backpropagation and gradient descent. You’ll then examine the role of loss functions and optimization techniques in training these models. In the second half of the module, you’ll dive into GANs, where a generator and discriminator compete to produce realistic synthetic data. You’ll study how adversarial training works, how binary cross-entropy loss is applied, and how GANs are used to model complex data distributions. By the end of this module, you’ll be able to implement and evaluate both autoencoders and GANs for representation learning and data generation.
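The binary cross-entropy loss mentioned above drives GAN training: the discriminator labels real images 1 and fakes 0, while the generator is scored as if its fakes were labeled real. A small pure-Python sketch of that loss, with illustrative (made-up) prediction values:

```python
import math

def bce(y_true, y_pred, eps=1e-7):
    # Binary cross-entropy averaged over a batch:
    #   -mean( y*log(p) + (1-y)*log(1-p) )
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip p for numerical stability
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Discriminator view: confident and mostly correct -> low loss
d_loss = bce([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1])

# Generator view: its fakes were scored 0.2 and 0.1, but it wants
# them called real (label 1) -> high loss, a strong training signal
g_loss = bce([1, 1], [0.2, 0.1])

print(d_loss, g_loss)
```

The same averaged-log-loss shape also appears in the autoencoder setting when reconstruction is scored per pixel, though mean squared error is the more common reconstruction loss.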
Duration: 3h
In this module, you’ll learn how convolutional neural networks extract features from images and perform classification. You’ll begin by building a tiny CNN by hand and in Excel, exploring convolution, max-pooling, and fully connected layers. Then, you’ll scale up to larger CNN architectures and examine how they process data through multiple convolution and pooling stages. You’ll also study how categorical cross-entropy loss and gradients are computed for training. Finally, you’ll walk through backpropagation across all CNN layers to understand how learning occurs.
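The convolution and max-pooling operations described above can be sketched by hand in a few lines of pure Python. The image and kernel values below are arbitrary illustrations (and, as in most deep-learning libraries, "convolution" here is implemented as cross-correlation, i.e. without flipping the kernel):

```python
def conv2d(img, kernel):
    # Valid 2D convolution: slide the kernel over every position
    # where it fits fully inside the image
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def maxpool2x2(fm):
    # 2x2 max pooling with stride 2: keep the largest value per window
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, len(fm[0]) - 1, 2)]
            for i in range(0, len(fm) - 1, 2)]

img = [[1, 0, 2, 1],
       [0, 1, 3, 0],
       [2, 1, 0, 1],
       [1, 0, 1, 2]]
kernel = [[1, -1], [-1, 1]]   # hypothetical 2x2 filter

fm = conv2d(img, kernel)      # 4x4 image, 2x2 kernel -> 3x3 feature map
pooled = maxpool2x2(fm)
print(fm, pooled)
```

Stacking such convolution and pooling stages, then flattening into fully connected layers, is exactly the progression the module walks through.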
Duration: 3h
In this module, you’ll explore two influential deep learning architectures: ResNet and U-Net. You’ll begin by learning how ResNet uses skip connections and residual learning to enable the training of very deep networks, addressing challenges like vanishing and exploding gradients. You’ll examine how residual blocks preserve information and allow gradients to flow directly across many layers. Then, you’ll shift to U-Net, a powerful architecture for image segmentation, and study its encoder-decoder structure, skip connections, and upsampling techniques like transposed convolution. By the end of this module, you’ll understand how both architectures enhance learning efficiency and performance in complex vision tasks.
Duration: 2h 10m
You will complete a non-proctored exam worth 20% of your grade. You must attempt the final in order to earn a grade in the course. If you've upgraded to the for-credit version of this course, please make sure you review the additional for-credit materials in the Introductory module and anywhere else they may be found.
Note: This page is periodically updated. Course information on the Coursera platform supersedes the information on this page. Click View on Coursera button above for the most up-to-date information.