

Classification with Deep Invariant Scattering Networks
Course URL: http://videolectures.net/nips2012_mallat_classification/
Lecturer: Stéphane Mallat
Institution: École Polytechnique
Date: 2013-01-16
Language: English
Course description:

High-dimensional data representation is in a confused infancy compared to statistical decision theory. How should kernels, or so-called feature vectors, be optimized? Should they increase or reduce dimensionality? Surprisingly, deep neural networks have managed to build kernels that accumulate experimental successes. This lecture shows that invariance emerges as a central concept for understanding high-dimensional representations and the mysteries of deep networks.

Intra-class variability is the curse of most high-dimensional signal classification. Fighting it means finding informative invariants. Standard mathematical invariants are either unstable for signal classification or not sufficiently discriminative. We explain how convolution networks compute stable, informative invariants over any group, such as translations, rotations, or frequency transpositions, by scattering data in high-dimensional spaces with wavelet filters. Beyond groups, invariants over manifolds can also be learned with unsupervised strategies that involve sparsity constraints. Applications will be discussed and shown on images and sounds.
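The pipeline the description refers to — wavelet convolution, complex modulus, then local averaging to build translation invariance — can be sketched for 1-D signals. This is a minimal illustration under assumed parameters (an ad-hoc Morlet-like filter bank, circular convolution, simple mean pooling), not Mallat's scattering-network implementation; all function names and parameter choices here are assumptions for illustration.

```python
import numpy as np

def morlet_filter(n, freq, sigma):
    """Complex Morlet-like filter of length n centered at `freq` cycles per n samples."""
    t = np.arange(n) - n // 2
    return np.exp(2j * np.pi * freq * t / n) * np.exp(-t**2 / (2 * sigma**2))

def scattering_first_order(x, freqs, sigma=8.0, pool=16):
    """First-order scattering sketch: wavelet convolution -> modulus -> averaging.

    The modulus discards phase (the part most sensitive to small shifts);
    the local averaging then yields coefficients that are stable to
    translations smaller than the pooling window."""
    n = len(x)
    coeffs = []
    for f in freqs:
        psi = morlet_filter(n, f, sigma)
        # circular convolution via FFT, followed by the complex modulus
        u = np.abs(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psi)))
        # non-overlapping mean pooling: approximate translation invariance
        s = u[: n - n % pool].reshape(-1, pool).mean(axis=1)
        coeffs.append(s)
    return np.stack(coeffs)

# A shifted copy of a signal yields nearly identical scattering coefficients.
x = np.sin(2 * np.pi * 20 * np.arange(256) / 256)
S1 = scattering_first_order(x, freqs=[10, 20, 40])
S2 = scattering_first_order(np.roll(x, 3), freqs=[10, 20, 40])
print(np.max(np.abs(S1 - S2)))
```

Second-order coefficients, which the full scattering transform uses to recover information lost in the averaging, would repeat the same convolve-modulus step on each `u` before pooling; they are omitted here for brevity.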
Keywords: statistical decision; deep networks; high-dimensional spaces
Source: VideoLectures.NET
Data collected: 2021-03-20: zyk
Last reviewed: 2021-05-14: yumf
Views: 27