


On Optimal Estimators in Learning Theory
Course URL: http://videolectures.net/mlss05us_temlyakov_oelt/
Lecturer: Vladimir Temlyakov
Institution: University of South Carolina
Date: 2007-02-25
Language: English
Abstract: This talk addresses some problems of supervised learning in the setting formulated by Cucker and Smale. Supervised learning, or learning from examples, refers to the process of building, from available data of inputs xi and outputs yi, i = 1,...,m, a function that best represents the relation between the inputs x in X and the corresponding outputs y in Y. The goal is to find, from the given data z := ((x1,y1),...,(xm,ym)), an estimator fz that approximates well the regression function fρ; the data points lie in Z = X x Y, and we assume that the pairs (xi,yi), i = 1,...,m, are independent and distributed according to ρ. There are several important ingredients in the mathematical formulation of this problem. We follow the approach that has become standard in approximation theory and has been used in recent papers: we first choose a function class W (a hypothesis space H) to work with. After selecting the class W, there are two ways to proceed. The first is based on studying the approximation of the L2(ρX) projection fW := (fρ)W of fρ onto W, where ρX is the marginal probability measure. This setting is known as the improper function learning problem, or the projection learning problem; in this case we do not assume that the regression function fρ comes from a specific (say, smoothness) class of functions. The second way is based on the assumption that fρ lies in W. This setting is known as the proper function learning problem; for instance, we may assume that fρ has some smoothness. We will give upper and lower estimates in both settings.
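As a sketch, the setup in the abstract can be written out in standard notation; the identities below are the usual Cucker–Smale formulation (the symbols ρ, ρX, fρ, fW, fz match the abstract, while the risk functional E is the standard least-squares risk, named here for illustration):

```latex
% Regression function: the conditional mean of y given x,
\[
  f_\rho(x) \;=\; \int_Y y \, d\rho(y \mid x).
\]
% Least-squares risk of a candidate f, and the standard identity
% relating excess risk to L_2(\rho_X) distance from f_\rho:
\[
  \mathcal{E}(f) \;=\; \int_Z \bigl(f(x) - y\bigr)^2 \, d\rho,
  \qquad
  \mathcal{E}(f) - \mathcal{E}(f_\rho)
  \;=\; \| f - f_\rho \|_{L_2(\rho_X)}^2 .
\]
% Projection (improper) setting: compare the estimator f_z with the
% best approximation to f_\rho from the hypothesis class W,
\[
  f_W \;:=\; \operatorname*{arg\,min}_{f \in W}
             \| f_\rho - f \|_{L_2(\rho_X)} .
\]
% Proper setting: assume f_\rho \in W, so f_W = f_\rho, and bound
% \| f_z - f_\rho \|_{L_2(\rho_X)} directly.
```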
Keywords: supervised learning; regression function; approximation theory
Course source: 视频讲座网
Last reviewed: 2019-07-10:lxf
Views: 42