Selective Multitask Learning by Coupling Common and Private Representations
Course URL: | http://videolectures.net/lms08_madrid_sanchez_smtl/
Lecturer: | Jaisiel Madrid-Sanchez
Institution: | Universidad Carlos III de Madrid
Date: | 2008-12-20
Language: | English
Course description: | In this contribution we address the problem of selective transfer of knowledge in multitask learning for classification. We consider selective transfer an interesting framework, since traditional multitask approaches become suboptimal when the tasks are not truly closely related. We study two multitask learning frameworks in which we develop the aforementioned selective transfer paradigm. The two scenarios correspond to two ways of constructing the mapping from examples to the output space. In the first, naive scenario, the hypothesis is a direct mapping from the input space X onto the output space Y. In the second, the overall hypothesis space is constructed in a more sophisticated way through a cascade of mappings into intermediate representation spaces. Examples of learning methods under the first scenario are Support Vector Machines (SVM) with a linear kernel and Single Layer Perceptrons (SLP). Learning methods of the second type are Multi Layer Perceptrons (MLP) and non-linear SVMs.
Keywords: | multitask learning; direct mapping; support vector machines
Source: | 视频讲座网 (VideoLectures.NET)
Last reviewed: | 2019-05-15:cjy
Views: | 26
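The description above contrasts direct input-to-output hypotheses (linear SVM, SLP) with hypotheses built through a cascade of intermediate representation spaces (MLP, non-linear SVM), and the title points to coupling common and private representations as the mechanism for selective transfer. The lecture page does not give the actual formulation, so the following is a minimal, hypothetical sketch of the general idea only: each task mixes a shared ("common") hidden layer with its own ("private") hidden layer through a learnable gate, so a task that turns out to be unrelated to the others can learn to rely mostly on its private representation. The class name, the sigmoid gate, the layer sizes, and the toy data are all assumptions made for illustration, not the method presented in the talk.

```python
# Minimal sketch (assumptions only, not the lecture's formulation):
# each task couples a shared "common" representation with its own
# "private" one through a learnable gate alpha_t in (0, 1).
import torch
import torch.nn as nn

class SelectiveMultitaskMLP(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, num_tasks: int):
        super().__init__()
        # Common hidden representation shared by all tasks.
        self.common = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Tanh())
        # One private hidden representation per task.
        self.private = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Tanh())
             for _ in range(num_tasks)]
        )
        # One binary-classification head per task.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, 1) for _ in range(num_tasks)]
        )
        # Learnable coupling strength per task; a sigmoid keeps it in (0, 1).
        self.coupling = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, x: torch.Tensor, task: int) -> torch.Tensor:
        alpha = torch.sigmoid(self.coupling[task])   # how much common knowledge this task uses
        h = alpha * self.common(x) + (1.0 - alpha) * self.private[task](x)
        return self.heads[task](h).squeeze(-1)       # logits for the given task

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SelectiveMultitaskMLP(in_dim=5, hidden_dim=16, num_tasks=2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for step in range(200):
        optimizer.zero_grad()
        loss = torch.zeros(())
        for task in range(2):
            x = torch.randn(32, 5)             # toy inputs
            y = (x.sum(dim=1) > 0).float()     # both toy tasks share the same rule here
            loss = loss + loss_fn(model(x, task), y)
        loss.backward()
        optimizer.step()
    # Gates near 1 mean a task leans on the common representation;
    # gates near 0 mean transfer was effectively switched off for that task.
    print("learned coupling gates:", torch.sigmoid(model.coupling).detach())
```

With a gate of this kind the amount of cross-task sharing is learned per task rather than fixed in advance, which is the intuition behind making the transfer "selective" when some tasks are not truly related.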