Foundations of Machine Learning
Lecture 3
Mehryar Mohri
Courant Institute and Google Research
mohri@cims.nyu.edu
Learning Bounds for
Infinite Hypothesis Sets
Motivation
With an infinite hypothesis set H, the error bounds
of the previous lecture are not informative.
Is efficient learning from a finite sample possible
when H is infinite?
Our example of axis-aligned rectangles shows that
it is possible.
Can we reduce the infinite case to a finite one,
e.g., by projecting onto finite samples?
Are there useful measures of complexity for
infinite hypothesis sets?
This lecture
Rademacher complexity
Growth Function
VC dimension
Lower bound
Rademacher Complexity
Definitions: let G be a family of functions mapping from Z to [a, b].

• Empirical Rademacher complexity of G:

$$\widehat{\mathcal{R}}_S(G) = \mathbb{E}_{\sigma}\left[\sup_{g \in G} \frac{1}{m} \sum_{i=1}^{m} \sigma_i\, g(z_i)\right],$$

where the $\sigma_i$'s are independent uniform random variables taking values in $\{-1, +1\}$ and $S = (z_1, \ldots, z_m)$.

• Rademacher complexity of G:

$$\mathcal{R}_m(G) = \mathbb{E}_{S \sim D^m}\left[\widehat{\mathcal{R}}_S(G)\right].$$
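Intuitively, the empirical Rademacher complexity measures how well functions in G can correlate with random sign noise on the sample S. A minimal numerical sketch of the definition follows; it assumes a hypothetical finite family G of threshold functions on real-valued data (so the supremum becomes a maximum) and estimates the expectation over σ by Monte Carlo. All names and parameters below (the threshold grid, num_trials, the seed) are illustrative assumptions, not part of the lecture.

import numpy as np

# Monte Carlo sketch of the empirical Rademacher complexity
#   R_S(G) = E_sigma[ sup_{g in G} (1/m) * sum_i sigma_i g(z_i) ].
# The finite class G (thresholds) and all parameter names are
# illustrative assumptions, not from the lecture.

rng = np.random.default_rng(0)

m = 50
S = rng.uniform(-1.0, 1.0, size=m)  # sample S = (z_1, ..., z_m)

# Hypothetical finite family G: threshold functions z -> sign(z - t).
thresholds = np.linspace(-1.0, 1.0, 21)
G = np.sign(S[None, :] - thresholds[:, None])  # shape (|G|, m)
G[G == 0] = 1.0  # break ties so values lie in {-1, +1}

num_trials = 10_000
total = 0.0
for _ in range(num_trials):
    sigma = rng.choice([-1.0, 1.0], size=m)  # Rademacher vector
    # sup over g in G of the empirical correlation (1/m) sum_i sigma_i g(z_i)
    total += np.max(G @ sigma) / m

estimate = total / num_trials
print(f"estimated empirical Rademacher complexity: {estimate:.4f}")

Averaging such estimates over many independently drawn samples S would in turn approximate the distribution-dependent Rademacher complexity R_m(G) from the second definition.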
Rademacher Complexity Bound