

Training Structural SVMs when Exact Inference is Intractable
Course URL: http://videolectures.net/icml08_finley_tssvm/
Lecturer: Thomas Finley
Institution: Cornell University
Date: 2008-08-28
Language: English
Course abstract: While discriminative training (e.g., CRF, structural SVM) holds much promise for machine translation, image segmentation, and clustering, the complex inference these applications require makes exact training intractable. This leads to a need for approximate training methods. Unfortunately, knowledge about how to perform efficient and effective approximate training is limited. Focusing on structural SVMs, we provide and explore two different classes of approximate training algorithms, which we call undergenerating (e.g., greedy) and overgenerating (e.g., relaxation) algorithms. We provide a theoretical and empirical analysis of both types of approximately trained structural SVMs, focusing on fully connected pairwise Markov random fields. We find that models trained with overgenerating methods have theoretical advantages over undergenerating methods, are empirically robust relative to their undergenerating brethren, and that relaxed trained models favor non-fractional predictions from relaxed predictors.
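
The abstract contrasts undergenerating (e.g., greedy) and overgenerating (e.g., relaxation-based) inference inside structural SVM training on fully connected pairwise Markov random fields. As a rough illustration only, and not the speaker's actual algorithm, the following is a minimal sketch of greedy (undergenerating) MAP inference for such an MRF; the function name greedy_map, the array shapes, and the convention of scoring both the (i, j) and (j, i) pairwise terms are assumptions made for this sketch.

```python
import numpy as np

def greedy_map(unary, pairwise):
    """Greedy (undergenerating) MAP inference for a fully connected
    pairwise MRF.

    unary:    (n, k) array of unary scores for n nodes and k labels.
    pairwise: (n, n, k, k) array of pairwise scores; entry [i, j, a, b]
              scores node i taking label a while node j takes label b.

    Nodes are assigned one at a time; each node takes the label that
    maximizes its unary score plus the pairwise scores against nodes
    assigned so far. The result is always a valid discrete labelling,
    but it may be suboptimal (hence "undergenerating").
    """
    n, k = unary.shape
    labels = [None] * n
    for i in range(n):
        best_label, best_score = 0, -np.inf
        for a in range(k):
            score = unary[i, a]
            for j in range(n):
                if labels[j] is not None:
                    # Score both directions of the (i, j) edge (assumption).
                    score += pairwise[i, j, a, labels[j]]
                    score += pairwise[j, i, labels[j], a]
            if score > best_score:
                best_label, best_score = a, score
        labels[i] = best_label
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 5, 3
    unary = rng.normal(size=(n, k))
    pairwise = rng.normal(size=(n, n, k, k))
    print("greedy labelling:", greedy_map(unary, pairwise))
```

Because the greedy pass commits to one label per node, it can only return valid discrete labellings but may miss the true maximizer; relaxation-based (overgenerating) inference instead optimizes over fractional labellings, which is the trade-off the talk analyzes.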
Keywords: discriminative training; machine translation; image segmentation
Course source: VideoLectures.NET
Last reviewed: 2019-04-18 (cwx)
Views: 50