

Reduced-Rank Hidden Markov Models
Course URL: http://videolectures.net/aistats2010_boots_rrhm/
Lecturer: Byron Boots
Institution: Carnegie Mellon University
Date: 2011-08-29
Language: English
Course description: Hsu et al. (2009) recently proposed an efficient, accurate spectral learning algorithm for Hidden Markov Models (HMMs). In this paper we relax their assumptions and prove a tighter finite-sample error bound for the case of Reduced-Rank HMMs, i.e., HMMs with low-rank transition matrices. Since rank-k RR-HMMs are a larger class of models than k-state HMMs while being equally efficient to work with, this relaxation greatly increases the learning algorithm's scope. In addition, we generalize the algorithm and bounds to models where multiple observations are needed to disambiguate state, and to models that emit multivariate real-valued observations. Finally, we prove consistency for learning Predictive State Representations, an even larger class of models. Experiments on synthetic data and a toy video, as well as on difficult robot vision data, yield accurate models that compare favorably with alternatives in simulation quality and prediction accuracy.
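
The talk builds on the spectral HMM learning algorithm of Hsu et al. (2009), which recovers observable operators from low-order empirical statistics via a rank-k SVD of the bigram matrix. The sketch below is an illustrative reimplementation of that base algorithm on a synthetic discrete HMM, not the authors' code; the toy model, sample size, and all variable names are assumptions made for the example.

```python
# Minimal sketch of Hsu-Kakade-Zhang-style spectral learning for an HMM.
# Toy model, sample size, and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy HMM: k hidden states, n discrete observations.
k, n = 2, 3
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])          # T[i, j] = P(next state i | state j)
O = np.array([[0.6, 0.1],
              [0.3, 0.3],
              [0.1, 0.6]])          # O[x, j] = P(obs x | state j)
pi = np.array([0.5, 0.5])           # initial state distribution

def sample_triples(num):
    """Sample (x1, x2, x3) observation triples from the toy HMM."""
    triples = np.empty((num, 3), dtype=int)
    for m in range(num):
        s = rng.choice(k, p=pi)
        for t in range(3):
            triples[m, t] = rng.choice(n, p=O[:, s])
            s = rng.choice(k, p=T[:, s])
    return triples

# Empirical low-order statistics.
data = sample_triples(50_000)
P1 = np.bincount(data[:, 0], minlength=n) / len(data)   # P[x1]
P21 = np.zeros((n, n))                                   # P[x2, x1]
P3x1 = np.zeros((n, n, n))                               # P[x3, x2, x1]
for x1, x2, x3 in data:
    P21[x2, x1] += 1
    P3x1[x3, x2, x1] += 1
P21 /= len(data)
P3x1 /= len(data)

# Spectral learning: rank-k SVD of the bigram matrix, then observable operators.
U, _, _ = np.linalg.svd(P21)
U = U[:, :k]                                   # top-k left singular vectors
b1 = U.T @ P1                                  # initial "belief" vector
binf = np.linalg.pinv(P21.T @ U) @ P1          # normalization vector
Bx = [U.T @ P3x1[:, x, :] @ np.linalg.pinv(U.T @ P21) for x in range(n)]

def joint_prob(seq):
    """Estimated P(x1, ..., xt) via the learned observable operators."""
    b = b1
    for x in seq:
        b = Bx[x] @ b
    return float(binf @ b)

def true_prob(seq):
    """Exact P(x1, ..., xt) from the forward recursion, for comparison."""
    alpha = pi * O[seq[0]]
    for x in seq[1:]:
        alpha = O[x] * (T @ alpha)
    return float(alpha.sum())

seq = [0, 2, 1]
print(joint_prob(seq), true_prob(seq))   # the two should roughly agree
```

In the reduced-rank setting described in the talk, the same kind of construction is carried out with k taken to be the rank of the transition matrix rather than the number of hidden states, which is what enlarges the model class while keeping the computation equally cheap.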
Keywords: reduced rank; Markov models; low-rank transition matrices
Source: VideoLectures.NET (视频讲座网)
Last reviewed: 2020-09-24 (dingaq)
Views: 37