Deep Learning Part 1 (IITM)

Deep Learning Part 1 (IITM) is a comprehensive, free online course provided by Swayam that runs for 12 weeks. The course is taught in English by Sudarshan Iyengar, and an e-certificate from Swayam is available upon completion.

Overview
  • Deep Learning has received a lot of attention over the past few years and has been employed successfully by companies such as Google, Microsoft, IBM, Facebook, and Twitter to solve a wide range of problems in Computer Vision and Natural Language Processing. In this course we will learn about the building blocks used in these Deep Learning based solutions. Specifically, we will learn about feedforward neural networks, convolutional neural networks, recurrent neural networks and attention mechanisms. We will also look at various optimization algorithms such as Gradient Descent, Nesterov Accelerated Gradient Descent, Adam, AdaGrad and RMSProp, which are used for training such deep neural networks; a small illustrative sketch of one such training update appears after this overview. By the end of this course, students will have knowledge of the deep architectures used to solve various Vision and NLP tasks.
    INTENDED AUDIENCE: Any interested learners.
    PREREQUISITES: Working knowledge of Linear Algebra and Probability Theory. It would be beneficial if participants have done a course on Machine Learning.
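
    To give a concrete flavour of the training updates mentioned above, here is a minimal sketch, for orientation only, of vanilla gradient descent training a single sigmoid neuron. The data, learning rate, and loop length are illustrative assumptions, not course material.

      import numpy as np

      # Toy data (illustrative assumption): the OR function on two binary inputs.
      X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
      y = np.array([0., 1., 1., 1.])

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      w = np.zeros(2)   # weights of a single sigmoid neuron
      b = 0.0           # bias
      eta = 1.0         # learning rate (assumed value)

      for _ in range(2000):
          y_hat = sigmoid(X @ w + b)       # forward pass
          grad_z = (y_hat - y) / len(y)    # dL/dz for cross-entropy loss
          w -= eta * (X.T @ grad_z)        # gradient descent step on w
          b -= eta * grad_z.sum()          # gradient descent step on b

      print(sigmoid(X @ w + b).round(2))   # approaches [0., 1., 1., 1.]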

Syllabus
  • COURSE LAYOUT

    Week 1: (Partial) History of Deep Learning, Deep Learning Success Stories, McCulloch Pitts Neuron, Thresholding Logic,
    Perceptrons, Perceptron Learning Algorithm
    Week 2: Multilayer Perceptrons (MLPs), Representation Power of MLPs, Sigmoid Neurons, Gradient Descent, Feedforward
    Neural Networks, Representation Power of Feedforward Neural Networks
    Week 3: Feedforward Neural Networks, Backpropagation
    Week 4: Gradient Descent (GD), Momentum Based GD, Nesterov Accelerated GD, Stochastic GD, AdaGrad, RMSProp, Adam (a sketch
    of the Adam update appears after this syllabus), Eigenvalues and eigenvectors, Eigenvalue Decomposition, Basis
    Week 5: Principal Component Analysis and its interpretations, Singular Value Decomposition
    Week 6: Autoencoders and relation to PCA, Regularization in autoencoders, Denoising autoencoders, Sparse autoencoders,
    Contractive autoencoders
    Week 7: Regularization: Bias Variance Tradeoff, L2 regularization, Early stopping, Dataset augmentation, Parameter sharing
    and tying, Injecting noise at input, Ensemble methods, Dropout
    Week 8: Greedy Layerwise Pre-training, Better activation functions, Better weight initialization methods, Batch Normalization
    Week 9: Learning Vectorial Representations Of Words
    Week 10: Convolutional Neural Networks, LeNet, AlexNet, ZF-Net, VGGNet, GoogLeNet, ResNet, Visualizing Convolutional
    Neural Networks, Guided Backpropagation, Deep Dream, Deep Art, Fooling Convolutional Neural Networks
    Week 11: Recurrent Neural Networks, Backpropagation through time (BPTT), Vanishing and Exploding Gradients, Truncated BPTT, GRU, LSTMs
    Week 12: Encoder Decoder Models, Attention Mechanism, Attention over images
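
    For a taste of the Week 4 material, here is a minimal, self-contained sketch of the Adam update rule minimizing the toy objective f(w) = (w - 3)^2. The hyperparameter values are the commonly cited defaults, assumed here for illustration.

      import numpy as np

      def grad(w):
          return 2.0 * (w - 3.0)                # derivative of f(w) = (w - 3)^2

      w = 0.0                                   # initial parameter (assumed)
      m, v = 0.0, 0.0                           # first/second moment estimates
      eta, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

      for t in range(1, 201):
          g = grad(w)
          m = beta1 * m + (1 - beta1) * g       # momentum-style gradient average
          v = beta2 * v + (1 - beta2) * g * g   # running average of squared gradients
          m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
          v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
          w -= eta * m_hat / (np.sqrt(v_hat) + eps)

      print(round(w, 3))                        # converges toward the minimum at w = 3

    Compared with vanilla gradient descent shown earlier, Adam scales each step by a running estimate of gradient magnitude, which is why it is covered alongside AdaGrad and RMSProp in Week 4.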