Putting Bayes to sleep
Course URL: http://videolectures.net/machine_koolen_bayes/
Lecturer: Wouter M. Koolen
Institution: China Science and Technology Center Company
Date: 2013-06-14
Language: English
Course description: We consider sequential prediction algorithms that are given the predictions from a set of models as inputs. If the nature of the data changes over time, in that different models predict well on different segments of the data, then adaptivity is typically achieved by mixing into the weights in each round a bit of the initial prior (a kind of weak restart). However, what if the favored models in each segment come from a small subset, i.e. the data is likely to be predicted well by models that predicted well before? Curiously, fitting such "sparse composite models" is achieved by mixing in a bit of all the past posteriors. This self-referential updating method is rather peculiar, but it is efficient and gives superior performance on many natural data sets. It is also important because it introduces a long-term memory: any model that has done well in the past can be recovered quickly. While Bayesian interpretations can be found for mixing in a bit of the initial prior, no Bayesian interpretation is known for mixing in past posteriors. We build atop the "specialist" framework from the online learning literature to give the Mixing Past Posteriors update a proper Bayesian foundation. We apply our method to a well-studied multitask learning problem and obtain an intriguing new efficient update that achieves a significantly better bound.
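For readers who want the mechanics behind the abstract, the two update styles it contrasts can be written in a few lines. The following is a minimal NumPy sketch, not code from the lecture: the names `hedge_step` and `run_experts`, the uniform initial prior, and the choice to mix in the unweighted average of all stored posteriors are illustrative assumptions (the Mixing Past Posteriors scheme of Bousquet and Warmuth admits other mixing distributions over the past).

```python
import numpy as np

def hedge_step(w, losses, eta=1.0):
    # Multiplicative-weights / Bayesian loss update:
    # scale each expert's weight by exp(-eta * loss) and renormalise.
    v = w * np.exp(-eta * losses)
    return v / v.sum()

def run_experts(loss_matrix, alpha=0.01, eta=1.0, scheme="mpp"):
    # loss_matrix: T rounds x n experts. Returns the weights played each round.
    # scheme="fixed_share": mix a fraction alpha of the initial prior back in
    # each round (adaptive, but no long-term memory).
    # scheme="mpp": mix in the average of all past posteriors instead, so a
    # previously good expert can be recovered quickly.
    T, n = loss_matrix.shape
    prior = np.full(n, 1.0 / n)     # uniform initial prior over experts
    w = prior.copy()
    past = [prior.copy()]           # stored posteriors, used by MPP mixing
    history = np.empty((T, n))
    for t in range(T):
        history[t] = w
        v = hedge_step(w, loss_matrix[t], eta)    # posterior after round t
        past.append(v)
        if scheme == "fixed_share":
            anchor = prior                        # mix in the initial prior
        else:
            anchor = np.mean(past, axis=0)        # mix in past posteriors
        w = (1 - alpha) * v + alpha * anchor      # mixing update
    return history

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy regimes: expert 0 is good early and late, expert 1 in the middle.
    losses = rng.random((150, 5))
    losses[:50, 0] = 0.0
    losses[100:, 0] = 0.0
    losses[50:100, 1] = 0.0
    w = run_experts(losses, alpha=0.05, eta=2.0, scheme="mpp")
    print(w[-1].round(3))   # weight shifts back to expert 0 in the last regime
```

In this toy setup, both schemes track the regime switches, but under "fixed_share" the recovered weight of expert 0 in the final segment has to regrow from the uniform prior, whereas under "mpp" the mass stored in past posteriors lets it return quickly; that contrast is the long-term-memory effect the abstract describes.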
Keywords: sequential prediction; weak restart; weight mixing
Source: VideoLectures.NET
Last reviewed: 2019-05-15 (cwx)
Views: 63