Sharpen your machine learning knowledge and prepare for any question you might get in a Python machine learning interview.
Have you ever wondered how to properly prepare for a Machine Learning interview? Of course you have, or you likely wouldn't be reading this right now! In this course, students will prepare to answer 15 common Machine Learning (ML) interview questions for a data scientist role in Python. These questions revolve around 7 important topics: data preprocessing, data visualization, supervised learning, unsupervised learning, model ensembling, model selection, and model evaluation. By the end of the course, students will have both the required theoretical background and the ability to develop Python code to successfully answer these 15 questions. The coding examples will be based mainly on the scikit-learn package, given its ease of use and its coverage of the most important ML techniques in Python.
The course does not teach machine learning fundamentals as these are covered in the course's prerequisites.
Data Pre-processing and Visualization
In the first chapter of this course, you'll perform the preprocessing steps required to build a predictive machine learning model, including how to handle missing values and outliers and how to normalize your dataset.
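For a rough flavor of those steps, here is a minimal scikit-learn sketch; the toy DataFrame, its column names, and the median-imputation and IQR-clipping choices are assumptions made for this example, not the course's own exercises.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Hypothetical data with a missing value and an obvious outlier (age 120)
df = pd.DataFrame({
    "age": [25, 32, np.nan, 51, 46, 120],
    "income": [40_000, 52_000, 61_000, np.nan, 58_000, 75_000],
})

# 1) Impute missing values with the column median
imputer = SimpleImputer(strategy="median")
imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

# 2) Clip outliers to the 1.5 * IQR "whisker" range of each column
q1, q3 = imputed.quantile(0.25), imputed.quantile(0.75)
iqr = q3 - q1
imputed = imputed.clip(lower=q1 - 1.5 * iqr, upper=q3 + 1.5 * iqr, axis=1)

# 3) Normalize features to zero mean and unit variance
scaled = StandardScaler().fit_transform(imputed)
print(scaled.round(2))
```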
Supervised Learning
In the second chapter of this course, you'll practice several aspects of supervised machine learning, such as selecting the optimal feature subset, regularization to avoid model overfitting, feature engineering, and ensemble models to address the so-called bias-variance trade-off.
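As a hedged illustration of two of these themes, the sketch below uses Lasso's L1 penalty as a combined regularization and embedded feature-selection step, and a random forest as a variance-reducing ensemble; the synthetic regression data and parameter values are assumptions for the example.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

# Synthetic data where only 5 of 20 features are truly informative
X, y = make_regression(n_samples=300, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Lasso shrinks uninformative coefficients to exactly zero,
# which doubles as a feature-selection step
lasso = Lasso(alpha=1.0).fit(X_train, y_train)
selected = [i for i, coef in enumerate(lasso.coef_) if coef != 0]
print("Features kept by Lasso:", selected)

# An ensemble of decorrelated trees trades a little bias for lower variance
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)
print("Lasso  R^2:", round(lasso.score(X_test, y_test), 3))
print("Forest R^2:", round(forest.score(X_test, y_test), 3))
```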
Unsupervised Learning
In the third chapter of this course, you'll apply unsupervised learning: feature extraction and visualization techniques for dimensionality reduction, and clustering methods to select not only an appropriate clustering algorithm but also the optimal number of clusters for a dataset.
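A minimal sketch of that workflow, assuming the built-in iris dataset and a small range of candidate cluster counts, might combine PCA for dimensionality reduction with K-means and the silhouette score for picking the number of clusters.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Standardize the features before PCA and clustering
X = StandardScaler().fit_transform(load_iris().data)

# Project onto two principal components for compression and visualization
X_2d = PCA(n_components=2).fit_transform(X)

# Score candidate cluster counts with the silhouette coefficient
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_2d)
    print(f"k={k}: silhouette={silhouette_score(X_2d, labels):.3f}")
```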
Model Selection and Evaluation
In the fourth and final chapter of this course, you'll really step it up: you'll apply bootstrapping and cross-validation to evaluate model generalization, use resampling techniques to handle imbalanced classes, detect and remove multicollinearity, and build an ensemble model.
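The sketch below touches three of these themes: a soft-voting ensemble, 5-fold cross-validation of its performance, and naive upsampling of a minority class on the training split only; the synthetic imbalanced dataset and the choice of models are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.utils import resample

# Synthetic dataset with a 90/10 class imbalance
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# A soft-voting ensemble that averages the predicted probabilities of two learners
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",
)

# 1) Estimate generalization performance with 5-fold cross-validation
cv_f1 = cross_val_score(ensemble, X, y, cv=5, scoring="f1").mean()
print("5-fold CV F1:", round(cv_f1, 3))

# 2) Address the imbalance by upsampling the minority class on the training
#    split only, so no duplicated rows leak into the held-out test set
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
X_min, y_min = X_train[y_train == 1], y_train[y_train == 1]
X_up, y_up = resample(X_min, y_min, n_samples=(y_train == 0).sum(), random_state=0)
X_bal = np.vstack([X_train[y_train == 0], X_up])
y_bal = np.concatenate([y_train[y_train == 0], y_up])

ensemble.fit(X_bal, y_bal)
print("Test F1 after upsampling:",
      round(f1_score(y_test, ensemble.predict(X_test)), 3))
```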