

Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization
Course URL: http://videolectures.net/colt2011_kale_optimization/
Lecturer: Satyen Kale
Institution: Yahoo! Inc.
Date: 2011-08-02
Language: English
Description: We give a novel algorithm for stochastic strongly-convex optimization in the gradient oracle model which returns an O(1/T)-approximate solution after T gradient updates. This rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O(log(T)/T), which was obtained by applying an online strongly-convex optimization algorithm with regret O(log(T)) to the batch setting. We complement this result by proving that any algorithm has expected regret of Ω(log(T)) in the online stochastic strongly-convex optimization setting. This lower bound holds even in the full-information setting, which reveals more information to the algorithm than just gradients. This shows that any online-to-batch conversion is inherently suboptimal for stochastic strongly-convex optimization. This is the first formal evidence that online convex optimization is strictly more difficult than batch stochastic convex optimization.
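To make the contrast in the abstract concrete, the Python/NumPy sketch below puts a standard online-to-batch baseline (SGD with step size 1/(λt) plus iterate averaging, the O(log(T)/T) route) next to an epoch-based SGD scheme of the kind that can attain the optimal O(1/T) rate for λ-strongly-convex stochastic objectives. The function names (grad_oracle, sgd_online_to_batch, epoch_sgd), the epoch-length and step-size constants, and the toy quadratic objective are all illustrative assumptions; this is a minimal sketch of the two rates being compared, not the specific algorithm presented in the talk.

import numpy as np

def sgd_online_to_batch(grad_oracle, x0, lam, T, project=lambda x: x):
    # Baseline implied by the abstract: SGD with step size 1/(lam*t), i.e. the
    # standard online strongly-convex algorithm with O(log T) regret, whose
    # averaged iterate gives the O(log(T)/T) batch rate via online-to-batch.
    x = x0.copy()
    avg = np.zeros_like(x0)
    for t in range(1, T + 1):
        g = grad_oracle(x)            # one call to the stochastic gradient oracle
        x = project(x - g / (lam * t))
        avg += (x - avg) / t          # running average of the iterates
    return avg

def epoch_sgd(grad_oracle, x0, lam, T, T1=4, project=lambda x: x):
    # Sketch of an epoch-based scheme of the kind that attains the optimal
    # O(1/T) rate for lam-strongly-convex objectives: SGD in epochs of doubling
    # length, constant step size halved each epoch, each epoch restarted from
    # the previous epoch's average.  T1 and the 1/lam initial step size are
    # illustrative choices, not constants taken from the talk.
    x_start = x0.copy()
    eta, epoch_len, used = 1.0 / lam, T1, 0
    while used < T:
        steps = min(epoch_len, T - used)
        x, avg = x_start.copy(), np.zeros_like(x0)
        for i in range(1, steps + 1):
            g = grad_oracle(x)
            x = project(x - eta * g)
            avg += (x - avg) / i
        x_start = avg                 # next epoch starts from this epoch's average
        eta, epoch_len, used = eta / 2.0, epoch_len * 2, used + steps
    return x_start

if __name__ == "__main__":
    # Toy lam-strongly-convex objective f(x) = (lam/2)*||x - x_star||^2 with a
    # noisy gradient oracle; both routines should approach x_star = (1, -2).
    rng = np.random.default_rng(0)
    x_star, lam = np.array([1.0, -2.0]), 1.0
    oracle = lambda x: lam * (x - x_star) + 0.5 * rng.standard_normal(2)
    x0 = np.zeros(2)
    print("averaged SGD :", sgd_online_to_batch(oracle, x0, lam, T=10000))
    print("epoch SGD    :", epoch_sgd(oracle, x0, lam, T=10000))

Roughly, each doubled epoch halves the expected suboptimality and the final epoch consumes about half of the total gradient budget, which is the mechanism that removes the log(T) factor paid by the 1/(λt)-step averaging scheme; this intuition is offered as an illustration under the assumptions stated above.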
Keywords: Computer Science; Optimization Methods; Stochastic Optimization
Source: VideoLectures.NET
Last reviewed: 2020-01-13 by chenxin
Views: 34