

Differentiable Sparse Coding
Course URL: http://videolectures.net/cmulls08_bradley_dsc/
Lecturer: David Bradley
Institution: Carnegie Mellon University
Date: Unknown
Language: English
Course description: Prior work has shown that features which appear to be biologically plausible as well as empirically useful can be found by sparse coding with a prior, such as a Laplacian (L1), that promotes sparsity. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A Posteriori (MAP) estimate, making it more useful for prediction problems. Additionally, we show how to calculate the derivative of the MAP estimate efficiently with implicit differentiation. One prior that can be differentiated this way is KL-regularization. We demonstrate its effectiveness on a wide variety of applications, and find that online optimization of the parameters of the KL-regularized model can significantly improve prediction performance.
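The core idea in the abstract can be illustrated with a small numeric sketch. The code below is a hypothetical toy, not the talk's implementation: it uses a generalized-KL penalty toward the all-ones vector, `sum(x log x - x + 1)`, as a smooth sparsity-promoting prior, finds the MAP code by damped Newton iteration, and then differentiates the MAP estimate with respect to the input `y` via the implicit function theorem (so no differentiation through the solver is needed). The dictionary, data, and regularization weight `lam` are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem sizes and data (hypothetical, for illustration only).
n_features, n_atoms = 8, 5
D = rng.standard_normal((n_features, n_atoms))   # dictionary
y = rng.standard_normal(n_features)              # input signal
lam = 0.5                                        # assumed regularization weight

def obj(x, y):
    # 0.5||y - Dx||^2 + lam * sum(x log x - x + 1): smooth and strictly convex for x > 0
    return 0.5 * np.sum((y - D @ x) ** 2) + lam * np.sum(x * np.log(x) - x + 1)

def grad(x, y):
    return D.T @ (D @ x - y) + lam * np.log(x)

def hess(x):
    return D.T @ D + lam * np.diag(1.0 / x)

def map_estimate(y, iters=50):
    """Damped Newton solve of grad(x, y) = 0 over the positive orthant."""
    x = np.ones(n_atoms)
    for _ in range(iters):
        step = np.linalg.solve(hess(x), grad(x, y))
        t = 1.0
        # Backtrack to keep x positive and the objective non-increasing.
        while np.any(x - t * step <= 0) or obj(x - t * step, y) > obj(x, y):
            t *= 0.5
        x = x - t * step
    return x

x_star = map_estimate(y)

# Implicit differentiation: grad(x*, y) = 0 and d(grad)/dy = -D^T give
#   dx*/dy = H(x*)^{-1} D^T,
# so the Jacobian of the MAP code costs one linear solve at the solution.
J_implicit = np.linalg.solve(hess(x_star), D.T)

# Finite-difference check of one column of the Jacobian.
eps = 1e-5
e0 = np.zeros(n_features)
e0[0] = eps
J_fd_col0 = (map_estimate(y + e0) - map_estimate(y - e0)) / (2 * eps)
```

Because the prior is smooth, the MAP estimate is a differentiable function of its inputs, which is exactly what allows the code layer to be trained end-to-end for prediction, as described above; with a non-smooth L1 prior the same implicit-differentiation step would not be well defined at the sparsity kinks.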
Keywords: computer science; machine learning; sparse coding
Source: VideoLectures.net
Last reviewed: 2019-12-12 (cwx)
Views: 34