

Sequential and factorized NML models
Course URL: http://videolectures.net/icml08_silander_sfn/
Lecturer: Tomi Silander
Institution: Helsinki Institute for Information Technology (HIIT)
Date: 2008-08-13
Language: English
Abstract: Currently, the most popular model selection criterion for learning Bayesian networks is the Bayesian mixture with a conjugate prior. This method has recently been reported to be very sensitive to the choice of prior hyper-parameters. On the other hand, the general model selection criteria AIC and BIC are derived through asymptotics, and their behavior is suboptimal for small sample sizes. In this work we introduce a new effective scoring criterion for learning Bayesian network structures, the factorized normalized maximum likelihood. This score has no tunable parameters and thus avoids the sensitivity problems of Bayesian scores. The new scoring method also suggests a parametrization of the Bayesian network that is based on the conditional normalized maximum likelihood predictive distribution.
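
The abstract gives no formulas; the following is a minimal sketch of the quantities it refers to, with notation chosen here rather than taken from the lecture. For a model class $\mathcal{M}$, the normalized maximum likelihood (NML) distribution over data $x^n$ is

  $P_{\mathrm{NML}}(x^n \mid \mathcal{M}) = \dfrac{P(x^n \mid \hat\theta(x^n), \mathcal{M})}{\sum_{y^n} P(y^n \mid \hat\theta(y^n), \mathcal{M})}$,

where $\hat\theta(x^n)$ is the maximum likelihood estimate computed from $x^n$ itself. The factorized NML score of a network structure $G$ over variables $X_1, \ldots, X_m$ with data columns $D_1, \ldots, D_m$ applies this locally, one variable at a time given the data of its parents:

  $S_{\mathrm{fNML}}(G; D) = \sum_{i=1}^{m} \log \dfrac{P(D_i \mid \hat\theta_i(D_i, D_{Pa_i}), G)}{\sum_{D_i'} P(D_i' \mid \hat\theta_i(D_i', D_{Pa_i}), G)}$.

Each local normalizer reduces to a product of multinomial NML normalizing constants, one per parent configuration, which can be computed with the linear-time recurrence of Kontkanen and Myllymäki. Below is a minimal Python sketch of that normalizer; the function name and the absence of numerical safeguards for large sample sizes are my own choices, not part of the lecture material.

from math import comb

def multinomial_nml_normalizer(K, n):
    # Parametric complexity C(K, n) of a K-valued categorical model for sample
    # size n, via the recurrence C(k, n) = C(k-1, n) + (n / (k-2)) * C(k-2, n).
    if n == 0 or K == 1:
        return 1.0
    # Base case C(2, n); Python evaluates 0**0 as 1, matching the convention 0^0 = 1.
    c1 = 1.0
    c2 = sum(comb(n, h) * (h / n) ** h * ((n - h) / n) ** (n - h)
             for h in range(n + 1))
    if K == 2:
        return c2
    for k in range(3, K + 1):
        c1, c2 = c2, c2 + (n / (k - 2)) * c1
    return c2

For example, multinomial_nml_normalizer(3, 1) returns 3.0: with a single observation, each of the three possible sequences has maximized likelihood 1, so the normalizer is simply their count.
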
Keywords: Bayesian networks; conjugate prior; hyper-parameters
Source: VideoLectures.NET
Last reviewed: 2021-01-15:yumf
Views: 68