


Maximum Likelihood vs. Sequential Normalized Maximum Likelihood in On-line Density Estimation
Course URL: http://videolectures.net/colt2011_kotlowski_maximum/
Lecturer: Wojciech Kotlowski
Institution: Poznań University of Technology, Poland
Date: 2011-08-02
Language: English
Description: The paper considers sequential prediction of individual sequences with log loss (online density estimation) using an exponential family of distributions. We first analyze the regret of the maximum likelihood ("follow the leader") strategy. We find that this strategy is (1) suboptimal and (2) requires an additional assumption about boundedness of the data sequence. We then show that both problems can be addressed by adding the currently predicted outcome to the calculation of the maximum likelihood, followed by normalization of the distribution. The strategy obtained in this way is known in the literature as the sequential normalized maximum likelihood or last-step minimax strategy. We show for the first time that for general exponential families, the regret is bounded by the familiar (k/2) log n and thus optimal up to O(1). We also show the relationship to the Bayes strategy with Jeffreys' prior.
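The description above gives the recipe for the sequential normalized maximum likelihood (last-step minimax) strategy: before predicting the next outcome, tentatively append each candidate outcome to the data, compute the maximum likelihood of the extended sequence, and normalize over candidates. A minimal sketch for a Bernoulli model (the simplest exponential family; function names are illustrative, not from the paper) shows how this differs from the plain maximum likelihood plug-in, which can assign probability 0 to an unseen outcome and thus incur infinite log loss:

```python
import math

def ml_joint(t, n):
    """Maximum over theta of the Bernoulli likelihood of a length-n sequence
    with t ones: (t/n)^t * ((n-t)/n)^(n-t), using the convention 0^0 = 1."""
    ll = 0.0
    if t > 0:
        ll += t * math.log(t / n)
    if n - t > 0:
        ll += (n - t) * math.log((n - t) / n)
    return math.exp(ll)

def snml_prob_one(s, m):
    """SNML probability that the next outcome is 1, given m past outcomes
    containing s ones: append each candidate outcome, maximize the
    likelihood of the extended sequence, then normalize."""
    w1 = ml_joint(s + 1, m + 1)  # tentatively append a 1
    w0 = ml_joint(s, m + 1)      # tentatively append a 0
    return w1 / (w0 + w1)

# After three zeros, the ML plug-in predicts P(1) = 0/3 = 0 (infinite log
# loss if a 1 arrives), while SNML keeps the probability strictly positive.
print(snml_prob_one(0, 3))  # small but nonzero
print(snml_prob_one(0, 0))  # 0.5 by symmetry on the empty sequence
```

Note that SNML needs no boundedness assumption on the data here: the normalization step guarantees a proper, strictly positive predictive distribution at every round.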
Keywords: log loss; sequential prediction; online density estimation; maximum likelihood; Bayes strategy
Source: VideoLectures.NET
Last reviewed: 2019-10-17: cwx
Views: 29