

Practical Variational Inference for Neural Networks
Course URL: http://videolectures.net/nips2011_graves_networks/
Lecturer: Alex Graves
Institution: University of Toronto
Date: 2012-09-06
Language: English
Summary (translated from Chinese):
Variational inference is a tractable approximation to Bayesian inference for neural networks. However, the approaches proposed so far apply only to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (equivalently, a minimum description length loss function) that can be applied to most neural networks. Along the way, it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and improve generalisation. Experimental results are given for a hierarchical multidimensional recurrent neural network applied to a speech corpus.
Description: Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However, the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus.
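The method described above maintains a diagonal-Gaussian posterior over each network weight and minimises a variational free energy: an expected negative log-likelihood plus a KL penalty toward the prior, with weights drawn by reparameterised sampling. The following NumPy sketch shows those ingredients plus a signal-to-noise pruning mask. This is an illustrative assumption-laden sketch, not the paper's exact formulation: the function names, the unit-Gaussian prior, and the SNR threshold are all choices made here for the example (the paper's pruning rule is stated in terms of the posterior, not necessarily this exact ratio).

```python
import numpy as np

def kl_diag_gauss(mu, sigma, prior_sigma=1.0):
    """KL( N(mu, sigma^2) || N(0, prior_sigma^2) ), summed over weights.

    This is the complexity term of the variational free energy
    (equivalently, the model-cost part of an MDL loss).
    """
    return np.sum(np.log(prior_sigma / sigma)
                  + (sigma**2 + mu**2) / (2.0 * prior_sigma**2)
                  - 0.5)

def sample_weights(mu, sigma, rng):
    """Reparameterised draw w = mu + sigma * eps, eps ~ N(0, I).

    Used to get a stochastic estimate of the expected data
    log-likelihood under the weight posterior.
    """
    return mu + sigma * rng.standard_normal(mu.shape)

def snr_prune(mu, sigma, thresh=1.0):
    """Zero out weights whose posterior is not confidently nonzero.

    Illustrative heuristic: keep a weight only if |mu| / sigma >= thresh,
    i.e. if the posterior mean is large relative to its uncertainty.
    """
    mask = np.abs(mu) / sigma >= thresh
    return mu * mask, mask

# Example: a well-determined weight survives, an uncertain one is pruned.
mu = np.array([2.0, 0.1])
sigma = np.array([1.0, 1.0])
pruned_mu, keep_mask = snr_prune(mu, sigma, thresh=1.0)
```

In a full training loop, the loss for one minibatch would be the negative log-likelihood evaluated at `sample_weights(mu, sigma, rng)` plus `kl_diag_gauss(mu, sigma)` (scaled appropriately for minibatching), with gradients taken with respect to `mu` and `sigma`.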
Keywords: neural networks; computer science; machine learning; Bayesian learning
Source: VideoLectures.NET
Last reviewed: 2020-06-02 by 毛岱琦 (volunteer course editor)
Views: 70