

Multi-Task Discriminative Estimation for Generative Models and Probabilities
Course URL: http://videolectures.net/nipsworkshops09_jebara_mtdegmp/  
Lecturer: Tony Jebara
Institution: Columbia University
Date: 2010-03-26
Language: English
Course description: Maximum entropy discrimination is a method for estimating distributions such that they meet classification constraints and perform accurate prediction. These distributions are over the parameters of a classifier, for instance, log-linear prediction models or log-likelihood ratios of generative models. Many of the resulting optimization problems are convex programs, and sometimes just simple quadratic programs. In multi-task settings, several discrimination constraints are available from many tasks, which can potentially produce even better discrimination. This advantage manifests itself when some parameter tying is involved, for instance, via multi-task sparsity assumptions. Using new variational bounds, it is possible to implement the multi-task variants as (sequential) quadratic programs or sequential versions of the independent discrimination problems. In these settings, it can be shown that multi-task discrimination requires no more than a constant increase in computation over independent single-task discrimination.
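To make the "simple quadratic program" concrete: with a Gaussian prior over the weights of a linear classifier and margin-style classification constraints, maximum entropy discrimination yields an SVM-like dual QP over the constraint multipliers. The sketch below is an illustrative single-task instance only (not the lecture's multi-task algorithm); the projected-gradient solver, the toy data, and the box bound `C` are assumptions for the example.

```python
import numpy as np

def med_dual_qp(X, y, C=1.0, lr=0.01, steps=2000):
    """Solve the SVM-like dual QP arising from MED with a Gaussian prior:
         max_a  sum_i a_i - 0.5 * sum_ij a_i a_j y_i y_j <x_i, x_j>
         s.t.   0 <= a_i <= C
       via projected gradient ascent (illustrative, not the lecture's method)."""
    n = len(y)
    K = (X @ X.T) * np.outer(y, y)          # label-signed Gram matrix
    a = np.zeros(n)                          # dual variables (multipliers)
    for _ in range(steps):
        grad = 1.0 - K @ a                   # gradient of the dual objective
        a = np.clip(a + lr * grad, 0.0, C)   # project back onto the box [0, C]
    # Posterior mean weight vector of the MED solution distribution
    w = X.T @ (a * y)
    return w

# Hypothetical linearly separable toy data
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = med_dual_qp(X, y)
print(np.sign(X @ w))  # signs should match the training labels y
```

The multi-task variants described in the lecture tie such per-task weight vectors together (e.g., via shared sparsity), turning the independent QPs into a sequence of coupled quadratic programs of comparable cost.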
Keywords: maximum entropy discrimination; classifier; log-linear prediction models
Source: VideoLectures.NET
Last reviewed: 2019-07-26 (cwx)
Views: 67