Sparse Exponential Weighting and Langevin Monte-Carlo
Course URL: http://videolectures.net/smls09_tsybakov_sewalmc/
Lecturer: Alexandre Tsybakov
Institution: Pierre and Marie Curie University (Université Pierre et Marie Curie)
Date: 2009-05-06
Language: English
Abstract: The performance of statistical estimators in several scenarios, such as adaptive nonparametric estimation, aggregation of estimators, and estimation under a sparsity constraint, can be assessed in terms of sparsity oracle inequalities (SOI) for the prediction risk. One of the challenges is to find estimators that attain the sharpest SOI under minimal assumptions on the dictionary. Estimation methods adapted to the sparsity scenario, such as the Lasso, the Dantzig selector, or their modifications, can easily be realized for very large problem dimensions, but their performance is conditioned on severe restrictions on the dictionary: such methods fail when the elements of the dictionary are not approximately uncorrelated. This is somewhat unsatisfactory, since the BIC method is known to enjoy better SOI without any assumption on the dictionary; the BIC method, however, is NP-hard. This talk focuses on Sparse Exponential Weighting, a new technique of sparse recovery that aims to realize a compromise between theoretical optimality and computational efficiency. The method is based on aggregation with exponential weights using a heavy-tailed, sparsity-favoring prior. The theoretical performance of Sparse Exponential Weighting in terms of SOI is comparable with that of the BIC and is even better in some aspects, and no assumption on the dictionary is needed. At the same time, we show that the method is computationally feasible for relatively large problem dimensions: we prove that Langevin Monte-Carlo (LMC) algorithms can be successfully used to compute Sparse Exponential Weighting estimators. Numerical experiments confirm the fast convergence properties of the LMC and demonstrate the good performance of the resulting estimators. This is joint work with Arnak Dalalyan.
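For reference, a sparsity oracle inequality in this setting typically has the following generic shape, where f_λ = Σ_j λ_j f_j ranges over linear combinations of the dictionary f_1, …, f_M and M(λ) counts the nonzero coordinates of λ; this is a generic template with unspecified constant C and logarithmic factor, not the talk's precise bound:

\[
\|\hat f - f\|_n^2 \;\le\; \inf_{\lambda \in \mathbb{R}^M}
\Big\{ \|f_\lambda - f\|_n^2 + \frac{C\, M(\lambda)\log M}{n} \Big\}
\]

The sketch below is a minimal illustration, not the talk's actual algorithm, of how an exponential-weighting estimator with a heavy-tailed sparsity prior could be computed by Langevin Monte-Carlo. Assumptions: a Gaussian regression model y = Xλ + noise, a pseudo-posterior proportional to exp(-||y - Xλ||²/β) · Π_j (τ² + λ_j²)^(-2) (a Student-type, sparsity-favoring prior), and the posterior mean approximated by averaging Euler-discretised Langevin iterates. The function name sew_lmc, the temperature heuristic for β, and all default parameters are hypothetical.

```python
import numpy as np

def sew_lmc(X, y, beta=None, tau=0.1, h=1e-4, n_iter=20000, burn_in=2000, seed=0):
    """Approximate a heavy-tailed-prior exponential-weighting estimator
    (a posterior mean) by Langevin Monte-Carlo. A sketch, not the talk's code.

    Pseudo-posterior (up to a constant), assumed from the abstract:
        pi(lam) ∝ exp(-||y - X @ lam||^2 / beta) * prod_j (tau^2 + lam_j^2)^(-2)
    Euler-discretised Langevin update:
        lam <- lam + h * grad_log_pi(lam) + sqrt(2h) * N(0, I)
    """
    rng = np.random.default_rng(seed)
    n, M = X.shape
    if beta is None:
        beta = 4.0 * np.var(y)  # heuristic temperature; a free choice here
    lam = np.zeros(M)
    total = np.zeros(M)
    for k in range(n_iter):
        # Gradient of the log-likelihood term -||y - X @ lam||^2 / beta
        grad = (2.0 / beta) * (X.T @ (y - X @ lam))
        # Gradient of the heavy-tailed log-prior -2 * sum_j log(tau^2 + lam_j^2)
        grad -= 4.0 * lam / (tau**2 + lam**2)
        # Langevin step with Gaussian innovation
        lam = lam + h * grad + np.sqrt(2.0 * h) * rng.standard_normal(M)
        if k >= burn_in:
            total += lam  # average post-burn-in iterates -> posterior mean
    return total / (n_iter - burn_in)
```

A toy call on synthetic sparse data, purely illustrative:

```python
rng = np.random.default_rng(1)
n, M = 100, 200                      # more dictionary elements than observations
X = rng.standard_normal((n, M)) / np.sqrt(n)
lam_true = np.zeros(M)
lam_true[:5] = 2.0                   # 5-sparse target
y = X @ lam_true + 0.1 * rng.standard_normal(n)
lam_hat = sew_lmc(X, y)
```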
Keywords: Computer science; machine learning; Monte Carlo methods
Source: VideoLectures.NET open course
Last reviewed: 2020-06-04:dingaq
Views: 45