


Gradient Descent with Sparsification: An Iterative Algorithm for Sparse Recovery with Restricted Isometry Property
Course URL: http://videolectures.net/icml09_garg_gdws/
Lecturer: Rahul Garg
Institution: Watson Research Center
Date: 2009-08-26
Language: English
Abstract: In this paper, we present an algorithm for finding an s-sparse vector x that minimizes the square error ∥y − Φx∥², where Φ satisfies the restricted isometry property (RIP). Our algorithm, called GraDeS (Gradient Descent with Sparsification), starts from an arbitrary s-sparse x and iteratively updates it as x ← Hs(x + (1/γ)·Φᵀ(y − Φx)), where γ > 1 is a constant and Hs sets all but the largest s coordinates in absolute value to zero. We show that GraDeS, in a constant number of iterations, computes the correct s-sparse solution to the system y = Φx whenever Φ satisfies the condition that the isometry constant δ2s < 1/3. This is the most general condition for which a near-linear-time algorithm is known. In comparison, the best condition under which any polynomial-time algorithm is known is δ2s < √2 − 1. An important contribution of the paper is to analyze how the hard-thresholding function Hs acts with respect to the potential ∥y − Φx∥². A special case of GraDeS corresponding to γ = 1, called Iterative Hard Thresholding (IHT), was previously shown to converge when δ3s < 1/√32. Our Matlab implementation of GraDeS outperforms previously proposed algorithms such as Subspace Pursuit, StOMP, OMP, and Lasso by an order of magnitude. Curiously, our experiments also uncovered several cases where L1-regularized regression (Lasso) fails but GraDeS finds the correct solution.
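The update rule stated in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' Matlab implementation; the function names, the default γ = 1.5, and the fixed iteration count are illustrative choices (the paper only requires γ > 1 and shows a constant number of iterations suffices under the RIP condition).

```python
import numpy as np

def hard_threshold(x, s):
    """H_s: keep the s largest-magnitude coordinates of x, zero the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def grades(y, Phi, s, gamma=1.5, n_iter=500):
    """GraDeS sketch: iterate x <- H_s(x + (1/gamma) * Phi^T (y - Phi x)).

    gamma and n_iter here are illustrative defaults, not values from the paper.
    """
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + (1.0 / gamma) * (Phi.T @ (y - Phi @ x)), s)
    return x
```

On a well-conditioned random Gaussian Φ (which satisfies RIP with high probability when the number of rows is large enough relative to s), this recovers an s-sparse x from y = Φx to machine precision.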
Keywords: sparse vector; squared error; iterative algorithm
Source: VideoLectures.NET
Last reviewed: 2019-12-05:lxf
Views: 58