On the Complexity of Bandit and Derivative-Free Stochastic Convex Optimization
Course URL: http://videolectures.net/colt2013_shamir_optimization/
Lecturer: Ohad Shamir
Institution: Weizmann Institute of Science
Date: 2013-08-09
Language: English
Course description: The problem of stochastic convex optimization with bandit feedback (in the learning community) or without knowledge of gradients (in the optimization community) has received much attention in recent years, in the form of algorithms and performance upper bounds. However, much less is known about the inherent complexity of these problems, and there are few lower bounds in the literature, especially for nonlinear functions. In this paper, we investigate the attainable error/regret in the bandit and derivative-free settings, as a function of the dimension d and the available number of queries T. We provide a precise characterization of the attainable performance for strongly-convex and smooth functions, which also implies a non-trivial lower bound for more general problems. Moreover, we prove that in both the bandit and derivative-free settings, the required number of queries must scale at least quadratically with the dimension. Finally, we show that on the natural class of quadratic functions, it is possible to obtain a "fast" O(1/T) error rate in terms of T, under mild assumptions, even without having access to gradients. To the best of our knowledge, this is the first such rate in a derivative-free stochastic setting, and holds despite previous results which seem to imply the contrary.
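
To make the derivative-free setting concrete, below is a minimal Python sketch. It is not the algorithm from the lecture or paper: it optimizes a quadratic using only noisy function-value queries with a standard two-point Gaussian-smoothing gradient estimator, and the dimension, query budget, noise level, smoothing radius, and step size are all illustrative assumptions.

import numpy as np

# Minimal sketch: stochastic gradient descent on f(x) = ||x - x_star||^2
# using only a noisy zeroth-order oracle F(x) = f(x) + noise. The gradient
# is estimated from two noisy queries via Gaussian smoothing:
#   g = (F(x + delta*u) - F(x - delta*u)) / (2*delta) * u,  u ~ N(0, I_d).
# For quadratics this estimate is unbiased (the smoothing adds no gradient
# bias), one intuition for why quadratics admit a "fast" O(1/T) rate.

rng = np.random.default_rng(0)

d = 10                        # dimension (assumed)
T = 20000                     # total query budget; two queries per step
x_star = rng.normal(size=d)   # unknown minimizer
sigma = 0.1                   # noise level of the function-value oracle

def noisy_f(x):
    """Stochastic zeroth-order oracle: f(x) plus Gaussian noise."""
    return float(np.sum((x - x_star) ** 2)) + sigma * rng.normal()

x = np.zeros(d)
delta = 0.1                   # smoothing radius (assumed, not tuned)
for t in range(1, T // 2 + 1):
    u = rng.normal(size=d)
    # Two-point gradient estimate; its variance grows with the dimension d,
    # one intuition for why the query count must scale with the dimension.
    g = (noisy_f(x + delta * u) - noisy_f(x - delta * u)) / (2 * delta) * u
    eta = 1.0 / (2.0 * (t + d))  # 1/(mu*t)-type step size, delayed for stability
    x = x - eta * g

print("final squared error:", float(np.sum((x - x_star) ** 2)))
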
Keywords: nonlinear functions; strongly-convex and smooth functions; derivative-free stochastic setting
Source: VideoLectures.NET
Last reviewed: 2019-03-10 (mmx)
Views: 110