
Lecture Description
In this video, make sure you define the X's like so; I flipped the last two lines by mistake (X_lately has to be sliced off before X is truncated, or it will be pulled from the wrong rows):
import numpy as np
from sklearn import preprocessing

X = np.array(df.drop(['label'], axis=1))  # features: everything except the label column
X = preprocessing.scale(X)                # standardize all features at once
X_lately = X[-forecast_out:]              # the final rows, reserved for forecasting
X = X[:-forecast_out]                     # everything else, for training and testing
To forecast out, we need some data. We decided we're forecasting out 10% of the dataset, so we want to, or at least *can*, generate forecasts for each of those final 10% of rows. When do we identify that data? We could grab it right away, but remember that the data we're trying to forecast against has to be scaled the same way the training data was. So do we just run preprocessing.scale() against the last 10% on its own? No: the scale method standardizes based on all of the data fed into it, so scaling the forecast rows in isolation would give them different statistics. Ideally, you would scale the training, testing, AND forecast/predicting data all together. Is this always possible or reasonable? No, but if you can do it, you should. In our case, right now, we can: the dataset is small enough and the processing time low enough that we'll preprocess and scale all of the data at once.
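To see why that matters, here is a tiny sketch (not from the video) showing that preprocessing.scale standardizes using the statistics of whatever you feed it, so a row scaled on its own won't match that same row scaled alongside the rest of the data:

import numpy as np
from sklearn import preprocessing

data = np.array([[1.0], [2.0], [3.0], [4.0]])

# Scaled together: mean and std are computed over all four rows.
print(preprocessing.scale(data).ravel())       # [-1.342 -0.447  0.447  1.342]

# Scaled in isolation: the last row standardizes against only itself.
print(preprocessing.scale(data[-1:]).ravel())  # [0.] -- not comparable to the above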
In many cases, you won't be able to do this. Imagine you were using gigabytes of data to train a classifier. It may take days to train, and you wouldn't want to re-scale and re-train every...single...time you wanted to make a prediction. Thus, you may need to either NOT scale anything, or scale the data separately. As usual, you will want to test both options and see which is best in your specific case.
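If you do end up scaling separately, one common approach (my sketch, not something shown in the video; X_train and X_new are hypothetical placeholders) is to fit a StandardScaler on the training data once and reuse its stored mean and std at prediction time:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit(X_train)      # learn mean/std from the training data only
X_train_scaled = scaler.transform(X_train)
# ... train the classifier on X_train_scaled, possibly for days ...

# Later, transform incoming rows with the SAME stored statistics;
# no re-fitting, no touching the huge training set again.
X_new_scaled = scaler.transform(X_new)

This way the expensive classifier never has to be retrained just because new rows showed up.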
With that in mind, let's handle all of the lines from the definition of X onward.
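Concretely, that part of the video builds toward something like the following sketch. It assumes X and y were split into train/test sets as in the Regression Training and Testing video, and LinearRegression is my stand-in for whichever regressor you trained:

from sklearn.linear_model import LinearRegression

clf = LinearRegression(n_jobs=-1)     # any sklearn regressor would slot in here
clf.fit(X_train, y_train)             # train on the scaled training split
accuracy = clf.score(X_test, y_test)  # r^2 score on the held-out test split

forecast_set = clf.predict(X_lately)  # predictions for the final 10% of rows
print(forecast_set, accuracy, forecast_out)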
pythonprogramming.net/forecasting-predicting-machine-learning-tutorial/
twitter.com/sentdex
www.facebook.com/pythonprogramming.net/
plus.google.com/+sentdex
Course Index
- Introduction to Machine Learning
- Regression Intro
- Regression Features and Labels
- Regression Training and Testing
- Regression forecasting and predicting
- Pickling and Scaling
- Regression How it Works
- How to program the Best Fit Slope
- How to program the Best Fit Line
- R Squared Theory
- Programming R Squared
- Testing Assumptions
- Classification w/ K Nearest Neighbors Intro
- K Nearest Neighbors Application
- Euclidean Distance
- Creating Our K Nearest Neighbors Algorithm
- Writing our own K Nearest Neighbors in Code
- Applying our K Nearest Neighbors Algorithm
- Final thoughts on K Nearest Neighbors
- Support Vector Machine Intro and Application
- Understanding Vectors
- Support Vector Assertion
- Support Vector Machine Fundamentals
- Support Vector Machine Optimization
- Creating an SVM from scratch
- SVM Training
- SVM Optimization
- Completing SVM from Scratch
- Kernels Introduction
- Why Kernels
- Soft Margin SVM
- Soft Margin SVM and Kernels with CVXOPT
- SVM Parameters
- Clustering Introduction
- Handling Non-Numeric Data
- K Means with Titanic Dataset
- Custom K Means
- K Means from Scratch
- Mean Shift Intro
- Mean Shift with Titanic Dataset
- Mean Shift from Scratch
- Mean Shift Dynamic Bandwidth
Course Description
The objective of this course is to give you a holistic understanding of machine learning, covering theory, application, and inner workings of supervised, unsupervised, and deep learning algorithms.
In this series, we'll be covering linear regression, K Nearest Neighbors, Support Vector Machines (SVM), flat clustering, hierarchical clustering, and neural networks.