Classical Machine Learning for Financial Engineering

Classical Machine Learning for Financial Engineering, provided by edX, is a comprehensive online course that runs for 7 weeks at 4-6 hours per week. It is taught by Ken Perry. Upon completion of the course, you can receive an e-certificate from edX. The course is taught in English and is a paid course; visit the course page at edX for detailed price information.

Overview
  • Classical Machine Learning refers to well-established techniques by which one makes inferences from data. This course will introduce a systematic approach (the “Recipe for Machine Learning”) and tools with which to accomplish this task. In addition to the typical models and algorithms taught (e.g., Linear and Logistic Regression), this course emphasizes the whole life cycle of the process, from data set acquisition and cleaning to analysis of errors, all in the service of an iterative process for improving inference.

    Our belief is that Machine Learning is an experimental process, and thus most learning will be achieved by “doing”. We will jump-start your experimentation: engineering first, then math. Early lectures will be a “sprint” to get you programming and experimenting. We will subsequently revisit topics with greater mathematical rigor.
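    A minimal sketch of that iterative fit/evaluate/inspect-errors workflow, written in Python with scikit-learn and a synthetic data set (both are assumptions for illustration; this is not the course's own code, and its tools and data may differ):

        # Sketch of the "acquire data -> fit -> measure error -> iterate" loop.
        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        # Acquire a data set (synthetic here, standing in for real
        # acquisition and cleaning).
        X, y = make_regression(n_samples=500, n_features=5, noise=10.0,
                               random_state=0)

        # Hold out data so out-of-sample error can be measured.
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, random_state=0)

        # Fit a first predictor.
        model = LinearRegression().fit(X_train, y_train)

        # Analyze errors on held-out data; large residuals guide the
        # next iteration of the recipe.
        preds = model.predict(X_test)
        print("test MSE:", mean_squared_error(y_test, preds))
        residuals = y_test - preds
        print("largest absolute residual:", np.abs(residuals).max())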

Syllabus
  • Week 1: Classical Machine Learning: Overview

    • What is Machine Learning (ML)?

    • ML and Finance; not ML for Finance

    • Classical Machine Learning: Introduction

    • Supervised Learning

    • Our first predictor

    • Notational conventions

  • Week 2: Linear Regression, Recipe for Machine Learning

    • Linear Regression

    • The Recipe for Machine Learning

    • The Regression Loss Function

    • Bias and Variance

  • Week 3: Transformations, Classification

    • Data Transformations: Introduction and mechanics

    • Logistic Regression

    • Non-numeric variables: text, images

    • Multinomial Classification

    • The Classification Loss Function

  • Week 4: Classification continued, Error Analysis

    • Baseline model

    • The Dummy Variable Trap

    • Transformations

    • Loss functions: mathematics

  • Week 5: More Models: Trees, Forests, Naive Bayes

    • Entropy, Cross Entropy, KL Divergence

    • Decision Trees

    • Naive Bayes

    • Ensembles

    • Feature Importance

  • Week 6: Support Vector Machines, Gradient Descent, Interpretation

    • Support Vector Classifiers

    • Gradient Descent

    • Interpretation: Linear Models

  • Week 7: Unsupervised Learning, Dimensionality Reduction

    • Unsupervised Learning

    • Dimensionality Reduction

    • Clustering

    • Principal Components

    • Pseudo Matrix Factorization: preview of Deep Learning