08 Support Vector Machines
Preparation
Read Ch. 5 of the book.
Material
Watch the video lectures below. Here is the material covered in the videos.
Session Description
This lecture introduces a more sophisticated classification method within machine learning: the Support Vector Machine (SVM). We will explore both the theory and the application of SVMs.
We will start with the basic concepts of SVM, including the idea of support vectors and how they influence the decision boundary. We will then discuss how to make non-linearly separable data linearly separable by adding new features. The kernel trick will be introduced as a powerful method to handle non-linear data, and we will explore how different kernels can be used in SVM.
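As a first taste of the ideas above, the sketch below (not part of the lecture material; data and parameter values are illustrative) fits a linear SVM with scikit-learn on two separable point clouds and inspects which training points end up as support vectors:

```python
# Illustrative sketch: fit a linear SVM and inspect its support vectors.
import numpy as np
from sklearn.svm import SVC

# Two roughly linearly separable point clouds (synthetic toy data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=-2.0, size=(20, 2)),
               rng.normal(loc=2.0, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Only the training points closest to the decision boundary become
# support vectors; they alone determine the separating hyperplane.
print(clf.support_vectors_.shape)  # (n_support_vectors, 2)
print(clf.n_support_)              # support vectors per class
```

Note that moving or removing any non-support-vector point leaves the fitted boundary unchanged, which is the sense in which the support vectors "influence" it.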
Key Concepts
- SVM Classification
- Support Vectors
- Decision Boundary
- Linear Separability
- Feature Transformation
- Kernel Trick
- Kernel Functions
- Non-linear Classification
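The feature-transformation idea in the list above can be shown on a small example (a hedged sketch, not from the course material): concentric circles are not linearly separable in 2D, but adding a squared-radius feature makes them separable by a plane.

```python
# Illustrative sketch: adding a feature can make non-linearly
# separable data linearly separable.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# A linear SVM in the original 2D space cannot separate concentric circles.
linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)

# Adding r^2 = x1^2 + x2^2 as a third feature makes the classes
# separable by a threshold on the radius, i.e. a plane in 3D.
X3 = np.column_stack([X, (X ** 2).sum(axis=1)])
lifted_acc = SVC(kernel="linear").fit(X3, y).score(X3, y)

print(linear_acc, lifted_acc)
```

The kernel trick achieves the same effect implicitly: a kernelized SVM computes inner products in such a lifted feature space without ever constructing the extra features.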
Learning Objectives
After attending this lecture and reading the corresponding part of the book, I expect you to be able to:
- Explain support vectors for linearly separable data, and how support vectors influence the decision boundary.
- Explain and exemplify how adding new features can make non-linearly separable data linearly separable.
- Discuss the key ideas behind the kernel trick and how this is used in kernelized support vector machines.
- Discuss how, when using a Gaussian kernel, the hyperparameters C and γ influence the decision boundaries.
- Discuss advantages and disadvantages of support vector machines.
- Discuss the concept of hyperparameters in the context of SVM, and demonstrate how to tune them to improve model performance.
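The hyperparameter-tuning objective can be practiced with a short sketch like the one below (the dataset, grid values, and fold count are arbitrary choices for illustration): a cross-validated grid search over C and gamma for an RBF-kernel SVM.

```python
# Illustrative sketch: tune the RBF-kernel hyperparameters C and gamma
# with cross-validated grid search.
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)

# Larger C penalizes margin violations more; larger gamma makes the
# Gaussian kernel more local, yielding more wiggly decision boundaries.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)
print(round(search.best_score_, 3))
```

Plotting the decision boundary for a few (C, gamma) pairs from the grid is a good way to see the effects discussed in the objectives: high gamma with high C tends to overfit, while low values give smoother, simpler boundaries.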
Video Lectures
Playlist with all videos (opens in YouTube)