
Lecture Description
In the previous Machine Learning with Python tutorial, we finished making a forecast of stock prices using regression and then visualized the forecast with Matplotlib. In this tutorial, we'll talk about some next steps.
I remember the first time I tried to learn about machine learning: most examples only covered up to the training and testing part, totally skipping prediction. Of the tutorials that did cover training, testing, and predicting, I did not find a single one that explained saving the trained algorithm. In examples, data is generally pretty small, so the training, testing, and prediction process is relatively fast. In the real world, however, data is likely to be much larger and take much longer to process. Since no one really talked about this important stage, I definitely wanted to include some information on processing time and saving your algorithm.
While our machine learning classifier takes only a few seconds to train, there may be cases where training takes hours or even days. Imagine needing to do that every day you wanted to forecast prices, or whatever. This is not necessary, as we can just save the trained classifier using the pickle module, as sketched below.
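As a rough sketch of what that looks like (assuming the X_train, y_train, and X_lately arrays from the earlier regression tutorials, and an arbitrary file name like linearregression.pickle), saving and reloading the classifier is just a dump and a load:

```python
import pickle
from sklearn.linear_model import LinearRegression

# X_train, y_train, and X_lately are assumed to come from the earlier
# regression tutorials (training features, labels, and the rows to forecast).
clf = LinearRegression(n_jobs=-1)
clf.fit(X_train, y_train)

# Save the trained classifier to disk so we never have to retrain it here.
with open('linearregression.pickle', 'wb') as f:
    pickle.dump(clf, f)

# Later, or in a completely separate script, load it back and predict.
with open('linearregression.pickle', 'rb') as f:
    clf = pickle.load(f)

forecast_set = clf.predict(X_lately)
```

After that, re-running the forecast is just a matter of loading the pickle, which takes a fraction of a second instead of retraining from scratch.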
pythonprogramming.net
twitter.com/sentdex
www.facebook.com/pythonprogramming.net/
plus.google.com/+sentdex
Course Index
- Introduction to Machine Learning
- Regression Intro
- Regression Features and Labels
- Regression Training and Testing
- Regression forecasting and predicting
- Pickling and Scaling
- Regression How it Works
- How to program the Best Fit Slope
- How to program the Best Fit Line
- R Squared Theory
- Programming R Squared
- Testing Assumptions
- Classification w/ K Nearest Neighbors Intro
- K Nearest Neighbors Application
- Euclidean Distance
- Creating Our K Nearest Neighbors Algorithm
- Writing our own K Nearest Neighbors in Code
- Applying our K Nearest Neighbors Algorithm
- Final thoughts on K Nearest Neighbors
- Support Vector Machine Intro and Application
- Understanding Vectors
- Support Vector Assertion
- Support Vector Machine Fundamentals
- Support Vector Machine Optimization
- Creating an SVM from scratch
- SVM Training
- SVM Optimization
- Completing SVM from Scratch
- Kernels Introduction
- Why Kernels
- Soft Margin SVM
- Soft Margin SVM and Kernels with CVXOPT
- SVM Parameters
- Clustering Introduction
- Handling Non-Numeric Data
- K Means with Titanic Dataset
- Custom K Means
- K Means from Scratch
- Mean Shift Intro
- Mean Shift with Titanic Dataset
- Mean Shift from Scratch
- Mean Shift Dynamic Bandwidth
Course Description
The objective of this course is to give you a holistic understanding of machine learning, covering theory, application, and the inner workings of supervised, unsupervised, and deep learning algorithms.
In this series, we'll be covering linear regression, K Nearest Neighbors, Support Vector Machines (SVM), flat clustering, hierarchical clustering, and neural networks.