深度学习随机矩阵理论模型_v0.1.pptx
Random Matrix Theory Models of Deep Learning
A neural network connects many individual neurons together
The output of one neuron serves as the input to another neuron
A multilayer neural network can be understood as a "nesting" of multiple nonlinear functions
The layers of a multilayer network can, in principle, be stacked without bound
This gives essentially unlimited modeling capacity: such networks can approximate arbitrary continuous functions
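A minimal numpy sketch of this nesting (the layer widths and the tanh nonlinearity are illustrative assumptions, not taken from the slides):

import numpy as np

def layer(x, W, b, phi=np.tanh):
    # One layer: affine map followed by a pointwise nonlinearity
    return phi(W @ x + b)

def mlp(x, params):
    # A multilayer network is the nested composition
    # phi(W_L(... phi(W_2 phi(W_1 x + b_1) + b_2) ...) + b_L)
    for W, b in params:
        x = layer(x, W, b)          # the output of one layer feeds the next
    return x

rng = np.random.default_rng(0)
dims = [4, 8, 8, 1]                 # illustrative layer widths
params = [(rng.standard_normal((m, n)), rng.standard_normal(m))
          for n, m in zip(dims[:-1], dims[1:])]
y = mlp(rng.standard_normal(4), params)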
Sigmoid
Tanh
Rectified linear units (ReLU)
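The three activation functions listed above, written out explicitly as a small numpy sketch:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # squashes inputs into (0, 1)

def tanh(z):
    return np.tanh(z)                 # squashes inputs into (-1, 1)

def relu(z):
    return np.maximum(0.0, z)         # rectified linear unit: max(0, z)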
The number of layers has increased year by year
Features are learned rather than hand-crafted
More layers capture more invariances
More data to train deeper networks
More computing (GPUs)
Better regularization: Dropout (see the sketch after this list)
New nonlinearities
Max pooling, Rectified linear units (ReLU)
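A minimal sketch of the dropout regularization named in the list above (inverted dropout; the drop probability p = 0.5 is an illustrative default, not a value from the slides):

import numpy as np

def dropout(a, p=0.5, training=True, rng=None):
    # Randomly zero each unit with probability p during training and rescale
    # the survivors so the expected activation stays the same; at test time
    # the layer is the identity.
    if not training or p == 0.0:
        return a
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)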
Theoretical understanding of deep networks remains shallow
Experimental Neuroscience uncovered:
Neural architecture of the Retina/LGN/V1/V2/V3, etc.
Existence of neurons with weights and activation functions (simple cells)
Pooling neurons (complex cells)
All these features are somehow present in Deep Learning systems
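The simple-cell / complex-cell correspondence above, as a rough numpy sketch (the patch size, random filters, ReLU activation, and max pooling are illustrative assumptions):

import numpy as np

def simple_cell(patch, w, b=0.0):
    # "Simple cell": weighted sum of the input followed by an activation function (ReLU here)
    return np.maximum(0.0, np.dot(w, patch) + b)

def complex_cell(responses):
    # "Complex cell": pools the responses of several simple cells (max pooling here)
    return np.max(responses)

rng = np.random.default_rng(0)
patch = rng.standard_normal(16)          # stand-in for a small image patch
filters = rng.standard_normal((4, 16))   # four simple-cell receptive fields
pooled = complex_cell([simple_cell(patch, w) for w in filters])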
Olshausen and Field demonstrated that receptive fields can be learned from image patches.
Olshausen and Field showed that an optimization process can drive the learning of image representations.
Olshausen-Field representations bear a strong resemblance to well-defined mathematical objects from harmonic analysis: wavelets, ridgelets, curvelets.
Harmonic analysis has a long history of developing optimal representations via optimization
Research in the 1990s: wavelets and related transforms are optimal sparsifying transforms for certain classes of images
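A rough numpy sketch in the spirit of Olshausen-Field sparse coding, i.e. learning a dictionary of image-patch features by optimization (the ISTA-style code update, atom count, and penalty lam are illustrative choices, not the original algorithm):

import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_coding(X, n_atoms=64, lam=0.1, n_iter=50, rng=None):
    # Alternate a sparse-code step (one ISTA iteration) with a dictionary update.
    # X: (n_samples, dim) matrix of (ideally whitened) image patches.
    rng = np.random.default_rng() if rng is None else rng
    n_samples, dim = X.shape
    D = rng.standard_normal((n_atoms, dim))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    A = np.zeros((n_samples, n_atoms))
    for _ in range(n_iter):
        # Sparse codes: gradient step on ||X - A D||^2, then soft-thresholding (L1 penalty)
        step = 1.0 / (np.linalg.norm(D @ D.T, 2) + 1e-12)
        A = soft_threshold(A - step * (A @ D - X) @ D.T, step * lam)
        # Dictionary: least-squares fit, then renormalize the atoms
        D = np.linalg.lstsq(A, X, rcond=None)[0]
        D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-12
    return D, A

rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 64))   # stand-in for 8x8 image patches
D, codes = sparse_coding(patches, n_atoms=32, lam=0.2, rng=rng)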
A class prediction rule can be viewed as a function f(x) of a high-dimensional argument x
Curse of Dimensionality
Traditional theoretical obstacle to high-dimensional approximation
Functions of a high-dimensional x can wiggle in too many dimensions to be learned from finite datasets
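One standard way to make the "wiggle" statement quantitative (a textbook covering-number bound, stated here for context rather than taken from the slides): to approximate a generic L-Lipschitz function on the unit cube to uniform accuracy \varepsilon, the number of samples must grow exponentially in the dimension d,

n \;\gtrsim\; \left(\frac{L}{\varepsilon}\right)^{d}
\quad\text{in order to guarantee}\quad
\sup_{x \in [0,1]^d} \bigl| f(x) - \hat f_n(x) \bigr| \le \varepsilon .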
Approximation theory
Perceptrons and multilayer feedforward networks are universal approximators: Cybenko '89, Hornik '89, Hornik '91, Barron '93
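Cybenko's 1989 statement, for reference: if \sigma is a continuous sigmoidal function, then for every continuous f on [0,1]^d and every \varepsilon > 0 there exist N, \alpha_i, w_i, b_i such that

\Bigl| \, f(x) \;-\; \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \Bigr| \;<\; \varepsilon
\qquad \text{for all } x \in [0,1]^{d}.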
Optimization theory
No spurious local optima for linear networks: Baldi Hornik ’89
Stuck in local minima: Brady ‘89
Stuck in local minima, but convergence guarantees for linearly separable data: Gori Tesi ‘92
Manifold of spurious local optima: Frasconi '97
Invariance, stability, and learning theory
Scattering networks: Bruna ’11, Bruna ’13, Mallat ’13
Defor