Theory of Generalization
by Caltech / Yaser Abu-Mostafa
Video Lecture 6 of 18
Copyright Information: Produced in association with Caltech Academic Media Technologies under the Attribution-NonCommercial-NoDerivs Creative Commons License (CC BY-NC-ND).
Date Added: January 17, 2015

Lecture Description

Topics: How an infinite model can learn from a finite sample. The most important theoretical result in machine learning. Lecture 6 of 18 of Caltech's Machine Learning Course - CS 156 by Professor Yaser Abu-Mostafa. This lecture was recorded on April 19, 2012, in Hameetman Auditorium at Caltech, Pasadena, CA, USA.

Course Description

This is an introductory course on machine learning by Caltech Professor Yaser Abu-Mostafa that covers the basic theory, algorithms, and applications. Machine learning (ML) enables computational systems to adaptively improve their performance with experience accumulated from observed data. ML techniques are widely applied in engineering, science, finance, and commerce to build systems for which we do not have a full mathematical specification (and that covers a lot of systems). The course balances theory and practice, covering the mathematical as well as the heuristic aspects. The course has 18 video lectures, each mainly focused on mathematical (theory), practical (technique), or conceptual (analysis) aspects. Check out the official course website for more information, including a very informative list of topics: https://work.caltech.edu/library/
