

Parameter Learning Using Approximate MAP Inference
Course URL: http://videolectures.net/nipsworkshops09_kumar_pluamapi/
Lecturer: Pawan Kumar Mudigonda
Institution: University of Oxford
Date: 2010-01-19
Language: English
Course description: In recent years, machine learning has seen the development of a series of algorithms for parameter learning that avoid estimating the partition function and instead rely on accurate approximate MAP inference. Within this framework, we consider two new topics. In the first part, we discuss parameter learning in a semi-supervised scenario. Specifically, we focus on a region-based scene segmentation model that explains an image in terms of its underlying regions (sets of connected pixels that provide discriminative features) and their semantic labels (such as sky, grass or foreground). While it is easy to obtain a (partial) ground-truth labeling for the pixels of a training image, it is not possible for a human annotator to provide us with the best set of regions (those that result in the most discriminative features). To address this issue, we develop a novel iterative MAP inference algorithm which selects the best subset of regions from a large dictionary using convex relaxations. We use our algorithm to "complete" the ground-truth labeling (i.e. infer the regions), which allows us to employ the highly successful max-margin training regime. We compare our approach with state-of-the-art methods and demonstrate significant improvements. In the second part, we discuss a new learning framework for general log-linear models based on contrastive objectives. A contrastive objective considers a set of "interesting" assignments and attempts to push up the probability of the correct instantiation at the expense of the other interesting assignments. In contrast to our approach, related methods such as pseudo-likelihood and contrastive divergence compare the correct instantiation only to nearby instantiations, which can be problematic when there is a high-scoring instantiation far away from the correct one. We present some of the theoretical properties and practical advantages of our method, including the ability to learn a log-linear model using only (approximate) MAP inference. We also show results of applying our method to some simple synthetic examples, where it significantly outperforms pseudo-likelihood.
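For orientation, a minimal sketch of the kind of contrastive objective described above (an illustration under standard log-linear assumptions, not a formula quoted from the talk): for a log-linear model p_w(y) ∝ exp(w^T φ(y)), maximum likelihood requires the partition function, while a contrastive objective renormalizes only over a small set S of "interesting" assignments containing the correct one, e.g. S = {y*, ŷ} with ŷ an (approximate) MAP assignment, so learning needs only MAP-style maximization rather than summation over all assignments:

\log p_w(y^*) = w^\top \phi(y^*) - \log \sum_{y \in \mathcal{Y}} \exp\big(w^\top \phi(y)\big)   % maximum likelihood: needs the full partition function over \mathcal{Y}

\ell_S(w) = w^\top \phi(y^*) - \log \sum_{y \in S} \exp\big(w^\top \phi(y)\big), \qquad y^* \in S \subseteq \mathcal{Y}   % contrastive sketch: sum only over the small set S

Maximizing \ell_S(w) pushes up the score of y^* at the expense of the other assignments in S; by comparison, pseudo-likelihood effectively restricts the comparison set to single-variable perturbations of y^*, which is why distant high-scoring assignments can be problematic for it.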
Keywords: machine learning; parameter learning; inference algorithms
Course source: 视频讲座网
Last reviewed: 2019-09-07:lxf
Views: 69