Classification with Deep Invariant Scattering Networks
Course URL: http://videolectures.net/nips2012_mallat_classification/
Lecturer: Stéphane Mallat
Institution: École Polytechnique, France
Date: 2013-01-16
Lecture language: English
Abstract: High-dimensional data representation is in a confused infancy compared to statistical decision theory. How to optimize kernels or so-called feature vectors? Should they increase or reduce dimensionality? Surprisingly, deep neural networks have managed to build kernels accumulating experimental successes. This lecture shows that invariance emerges as a central concept for understanding high-dimensional representations and deep network mysteries. Intra-class variability is the curse of most high-dimensional signal classification. Fighting it means finding informative invariants. Standard mathematical invariants are either unstable for signal classification or not sufficiently discriminative. We explain how convolution networks compute stable, informative invariants over any group, such as translations, rotations, or frequency transpositions, by scattering data in high-dimensional spaces with wavelet filters. Beyond groups, invariants over manifolds can also be learned with unsupervised strategies that involve sparsity constraints. Applications will be discussed and shown on images and sounds.
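To make the abstract's central idea concrete, below is a minimal sketch of a first-order wavelet scattering transform in one dimension: the signal is filtered by dyadic band-pass wavelets, a complex modulus discards the unstable phase, and a final averaging produces translation-invariant coefficients. This is an illustrative toy, not Mallat's actual implementation; the filter shapes (Gaussian windows in the Fourier domain as rough Morlet stand-ins) and the parameters `xi` and `sigma` are assumptions chosen for simplicity.

```python
import numpy as np

def morlet_filter(n, xi, sigma):
    """Approximate band-pass filter: a Gaussian window in the Fourier
    domain centered at normalized frequency xi (a crude Morlet stand-in)."""
    freqs = np.fft.fftfreq(n)
    return np.exp(-((freqs - xi) ** 2) / (2 * sigma ** 2))

def scattering_first_order(x, num_scales=4):
    """Zeroth- and first-order scattering coefficients of a 1D signal.

    U[j] = |x * psi_j| is the wavelet modulus; averaging it (here, a
    low-pass filter followed by a global mean) yields coefficients that
    are invariant to circular translation, as the abstract describes.
    """
    n = len(x)
    X = np.fft.fft(x)
    phi = morlet_filter(n, 0.0, 0.05)            # low-pass averaging window
    coeffs = []
    s0 = np.real(np.fft.ifft(X * phi))           # zeroth order: x * phi
    coeffs.append(s0.mean())
    for j in range(num_scales):
        xi = 0.4 / (2 ** j)                      # dyadic center frequencies
        psi = morlet_filter(n, xi, xi / 4)
        u = np.abs(np.fft.ifft(X * psi))         # modulus removes the phase
        s1 = np.real(np.fft.ifft(np.fft.fft(u) * phi))
        coeffs.append(s1.mean())
    return np.array(coeffs)
```

Because the final averaging is global, shifting the input circularly leaves the coefficients unchanged, which is the stability-plus-invariance trade-off the lecture develops; deeper scattering layers would iterate the same wavelet-modulus operation on each `u`.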
Keywords: high-dimensional data; kernel optimization; convolutional networks
Source: VideoLectures.NET
Last reviewed: 2020-06-02 by 毛岱琦 (volunteer course editor)
Views: 88