
Lecture Description
Once we've determined that we can use kernels, the next question is, of course: why bother with kernels when we could use some other function to transform our data into more dimensions? The point of using kernels is that they let us perform a calculation (an inner product, in this case) in a higher-dimensional space without ever actually needing to work in that dimension.
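To make that concrete, here is a minimal sketch (not from the lecture itself; the function names and the degree-2 polynomial kernel are illustrative) showing that a kernel returns the same inner product you would get by explicitly mapping the data into a higher-dimensional feature space, without ever constructing that space:

```python
import numpy as np

def phi(v):
    """Explicit feature map for a degree-2 polynomial kernel on 2D input.
    Maps [v1, v2] -> [v1^2, v2^2, sqrt(2)*v1*v2], i.e. into 3 dimensions."""
    v1, v2 = v
    return np.array([v1**2, v2**2, np.sqrt(2) * v1 * v2])

def poly_kernel(x, z):
    """Degree-2 polynomial kernel: K(x, z) = (x . z)^2.
    Gives the same result as phi(x) . phi(z), but never leaves 2D."""
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 4.0])

explicit = np.dot(phi(x), phi(z))   # inner product computed in the 3D feature space
via_kernel = poly_kernel(x, z)      # same value, computed entirely in 2D

print(explicit, via_kernel)         # both print 121.0
```

For a degree-2 polynomial kernel the explicit feature space is only 3-dimensional, so mapping is still cheap; but for higher degrees, or for something like the RBF kernel whose feature space is effectively infinite-dimensional, evaluating the kernel directly is the only practical option, and that is the whole appeal.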
pythonprogramming.net
twitter.com/sentdex
www.facebook.com/pythonprogramming.net/
plus.google.com/+sentdex
Course Index
- Introduction to Machine Learning
- Regression Intro
- Regression Features and Labels
- Regression Training and Testing
- Regression forecasting and predicting
- Pickling and Scaling
- Regression How it Works
- How to program the Best Fit Slope
- How to program the Best Fit Line
- R Squared Theory
- Programming R Squared
- Testing Assumptions
- Classification w/ K Nearest Neighbors Intro
- K Nearest Neighbors Application
- Euclidean Distance
- Creating Our K Nearest Neighbors A
- Writing our own K Nearest Neighbors in Code
- Applying our K Nearest Neighbors Algorithm
- Final thoughts on K Nearest Neighbors
- Support Vector Machine Intro and Application
- Understanding Vectors
- Support Vector Assertion
- Support Vector Machine Fundamentals
- Support Vector Machine Optimization
- Creating an SVM from scratch
- SVM Training
- SVM Optimization
- Completing SVM from Scratch
- Kernels Introduction
- Why Kernels
- Soft Margin SVM
- Soft Margin SVM and Kernels with CVXOPT
- SVM Parameters
- Clustering Introduction
- Handling Non-Numeric Data
- K Means with Titanic Dataset
- Custom K Means
- K Means from Scratch
- Mean Shift Intro
- Mean Shift with Titanic Dataset
- Mean Shift from Scratch
- Mean Shift Dynamic Bandwidth
Course Description
The objective of this course is to give you a holistic understanding of machine learning, covering the theory, application, and inner workings of supervised, unsupervised, and deep learning algorithms.
In this series, we'll be covering linear regression, K Nearest Neighbors, Support Vector Machines (SVM), flat clustering, hierarchical clustering, and neural networks.