Multi-task Learning
Course URL: http://videolectures.net/roks2013_pontil_learning/
Lecturer: Massimiliano Pontil
Institution: University of London
Date: 2013-08-26
Language: English

Course description: A fundamental limitation of standard machine learning methods is the cost incurred by the preparation of the large training samples required for good generalization. A potential remedy is offered by multi-task learning: in many cases, while individual sample sizes are rather small, there are samples to represent a large number of learning tasks (linear regression problems), which share some constraining or generative property. If this property is sufficiently simple, it should allow for better learning of the individual tasks despite their small individual sample sizes. In this talk I will review a wide class of multi-task learning methods which encourage low-dimensional representations of the regression vectors. I will describe techniques to solve the underlying optimization problems and present an analysis of the generalization performance of these learning methods which provides a proof of the superiority of multi-task learning under specific conditions.
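The talk's central idea, regression vectors for many tasks that share a low-dimensional representation, is commonly realized by penalizing the trace (nuclear) norm of the matrix whose columns are the per-task weight vectors, and solving by proximal gradient descent with singular value thresholding. The sketch below is an illustration of that general approach, not the specific methods of the lecture; the function names and the choice of step size are assumptions.

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: the proximal operator of tau * ||W||_*."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def multitask_trace_norm(Xs, ys, lam=1.0, step=None, iters=500):
    """Proximal gradient for
        min_W  0.5 * sum_t ||X_t w_t - y_t||^2 + lam * ||W||_*
    where column t of the d x T matrix W is the regression vector of task t.
    (Illustrative sketch; the lecture covers a broader class of methods.)"""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    if step is None:
        # Lipschitz bound for the block-separable loss gradient
        step = 1.0 / max(np.linalg.norm(X.T @ X, 2) for X in Xs)
    for _ in range(iters):
        # per-task least-squares gradients, stacked column-wise
        G = np.column_stack([X.T @ (X @ w - y)
                             for X, w, y in zip(Xs, W.T, ys)])
        W = svt(W - step * G, step * lam)  # gradient step, then shrink spectrum
    return W
```

Because the penalty shrinks singular values, the recovered columns tend to lie near a common low-dimensional subspace, which is what lets tasks with small individual samples borrow strength from one another.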
Keywords: machine learning; underlying optimization; regression vectors
Source: VideoLectures.NET
Data collected: 2020-11-05:zyk
Last reviewed: 2020-11-05:zyk
Views: 54