Inference Complexity as Learning Bias
Course URL: http://videolectures.net/cmulls08_domingos_icl/
Lecturer: Pedro Domingos
Institution: University of Washington
Date: 2009-01-15
Language: English
Course description: Graphical models are usually learned without regard to the cost of doing inference with them. As a result, even if a good model is learned, it may perform poorly at prediction, because it requires approximate inference. We propose an alternative: learning models with a score function that directly penalizes the cost of inference. Specifically, we learn arithmetic circuits with a penalty on the number of edges in the circuit (in which the cost of inference is linear). Our algorithm is equivalent to learning a Bayesian network with context-specific independence by greedily splitting conditional distributions, at each step scoring the candidates by compiling the resulting network into an arithmetic circuit, and using its size as the penalty. We show how this can be done efficiently, without compiling a circuit from scratch for each candidate. Experiments on several real-world domains show that our algorithm is able to learn tractable models with very large treewidth, and yields more accurate predictions than a standard context-specific Bayesian network learner, in far less time. (Joint work with Daniel Lowd.)
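The core idea is a score that trades data fit against inference cost. The Python toy below is a minimal sketch of that trade-off under simplifying assumptions: it greedily splits a single variable's conditional distribution (context-specific independence as a decision tree with Bernoulli leaves) and charges each candidate a size penalty. The leaf count stands in for the edge count of the compiled arithmetic circuit, and LAMBDA is an assumed penalty weight, so this illustrates the scoring idea only, not the authors' implementation.

```python
# Toy illustration of "score = log-likelihood - LAMBDA * size": greedily
# split one conditional distribution, penalizing each candidate's size.
# The leaf count is a crude stand-in for the edge count of the network
# compiled into an arithmetic circuit (the paper's actual penalty).
import math
import random

LAMBDA = 2.0  # assumed penalty weight, not a value from the paper

def leaf_ll(rows):
    """Log-likelihood of rows under a Laplace-smoothed Bernoulli leaf."""
    ones = sum(r["y"] for r in rows)
    p = (ones + 1) / (len(rows) + 2)
    return sum(math.log(p if r["y"] else 1 - p) for r in rows)

def score(leaves):
    """Fit minus size penalty (leaf count as a proxy for circuit edges)."""
    return sum(leaf_ll(rows) for _, rows in leaves) - LAMBDA * len(leaves)

def split_leaf(leaves, i, var):
    """Split leaf i on binary parent `var`, refining its context."""
    ctx, rows = leaves[i]
    lo = [r for r in rows if r[var] == 0]
    hi = [r for r in rows if r[var] == 1]
    if not lo or not hi:
        return None  # degenerate split: one side is empty
    rest = leaves[:i] + leaves[i + 1:]
    return rest + [(ctx + ((var, 0),), lo), (ctx + ((var, 1),), hi)]

def greedy_learn(data, parents):
    """Apply the best score-improving split until none pays its penalty."""
    leaves = [((), data)]  # start from the unconditional distribution
    best = score(leaves)
    while True:
        candidates = []
        for i, (ctx, _) in enumerate(leaves):
            used = {v for v, _ in ctx}
            for var in parents:
                if var not in used:
                    cand = split_leaf(leaves, i, var)
                    if cand is not None:
                        candidates.append(cand)
        if not candidates:
            break
        s, cand = max(((score(c), c) for c in candidates),
                      key=lambda t: t[0])
        if s <= best:
            break  # no split improves the penalized score
        best, leaves = s, cand
    return leaves, best

if __name__ == "__main__":
    random.seed(0)
    # Synthetic data: y depends on 'a' but not 'b'; the penalty should
    # stop the learner from splitting on the irrelevant parent.
    data = [{"a": a, "b": random.randint(0, 1),
             "y": int(random.random() < (0.9 if a else 0.2))}
            for a in (random.randint(0, 1) for _ in range(500))]
    leaves, best = greedy_learn(data, parents=("a", "b"))
    for ctx, rows in leaves:
        print(ctx, len(rows))
    print("penalized score:", round(best, 2))
```

In the actual algorithm, the penalty is the edge count of the circuit obtained by compiling the candidate network, and the abstract notes this is maintained incrementally rather than recompiled from scratch for each candidate.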
Keywords: Bayesian networks; graphical models; circuits
Source: VideoLectures.NET
Last reviewed: 2021-02-10 (nkq)
Views: 36