Large-Scale Learning and Inference: What We Have Learned with Markov Logic Networks

Course URL: http://videolectures.net/nipsworkshops09_domingos_lsliwwlmln/
Lecturer: Pedro Domingos
Institution: University of Washington
Date: 2010-01-19
Language: English
Course description: Markov logic allows very large and rich graphical models to be compactly specified. Current learning and inference algorithms for Markov logic can routinely handle models with millions of variables, billions of features, thousands of latent variables, and strong dependencies. In this talk I will give an overview of the main ideas in these algorithms, including weighted satisfiability, MCMC with deterministic dependencies, lazy inference, lifted inference, relational cutting planes, scaled conjugate gradient, relational clustering and relational pathfinding. I will also discuss the lessons learned in developing successive generations of these algorithms and promising ideas for the next round of scaling up.
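A brief note for readers new to the formalism (this definition is standard in the Markov logic literature and is not part of the original abstract): a Markov logic network is a set of weighted first-order formulas (F_i, w_i). Together with a finite set of constants, it defines a Markov network over all ground atoms, with joint distribution

    P(X = x) = \frac{1}{Z} \exp\!\left( \sum_i w_i \, n_i(x) \right)

where n_i(x) is the number of true groundings of formula F_i in world x and Z is the partition function. For example, a rule such as \forall x\; Smokes(x) \Rightarrow Cancer(x) with a positive weight makes worlds that violate fewer of its groundings exponentially more probable. This is how a handful of formulas can compactly specify a model with millions of ground variables and billions of features, as described in the abstract.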
Keywords: Markov logic; graphical models; inference algorithms
Source: VideoLectures.NET
Last reviewed: 2019-09-07 (lxf)
Views: 58