Offering institution: The Hebrew University of Jerusalem

21
The Markovian Patch-Occupancy (MPO) framework in Community Ecology
  Omri Allouch (The Hebrew University of Jerusalem) Ecological research over the years has pointed to the existence of a wide spectrum of 'semi-universal' patterns of species diversity, found over very ...
Popularity: 76

22
Trading Regret Rate for Computational Efficiency in Online Learning with Limited Feedback
  Shai Shalev-Shwartz (The Hebrew University of Jerusalem) We study low regret algorithms for online learning with limited feedback, where there is an additional constraint on the computational power of the le...
Popularity: 14

23
Convergent Message-Passing Algorithms for Inference over General Graphs with Convex Free Energies
  Tamir Hazan (The Hebrew University of Jerusalem) Inference problems in graphical models can be represented as a constrained optimization of a free energy function. It is known that when the Bethe fr...
Popularity: 42

24
The price of bandit information in multiclass online classification
  Amit Daniely (The Hebrew University of Jerusalem) We consider two scenarios of multiclass online learning of a hypothesis class. In the full information scenario, the learner is exposed to instances ...
Popularity: 53

25
A simple feature extraction for high dimensional image representations
  Amit Gruber (The Hebrew University of Jerusalem) We investigate a method to find local clusters in low dimensional subspaces of high dimensional data, e.g. in high dimensional image descriptions. Usi...
Popularity: 42

26
What is the Optimal Number of Features? A learning theoretic perspective
  Amir Navot (The Hebrew University of Jerusalem) In this paper we discuss the problem of feature selection for supervised learning from the standpoint of statistical machine learning. We inquire what...
Popularity: 79

27
The Sample-Computational Tradeoff
  Shai Shalev-Shwartz (The Hebrew University of Jerusalem) When analyzing the error of a learning algorithm, it is common to decompose the error into approximation error (measuring how well the hypothesis cla...
Popularity: 51

28
Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
  Shai Shalev-Shwartz (The Hebrew University of Jerusalem) Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to the...
Popularity: 162
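The SDCA method named in this talk's title has a simple closed-form coordinate update for the hinge loss. Below is a minimal sketch for an L2-regularized hinge-loss SVM; the function name, hyperparameters, and toy data are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def sdca_hinge(X, y, lam=0.1, epochs=20, seed=0):
    # Minimal SDCA sketch for an L2-regularized hinge-loss SVM.
    # Maintains dual variables alpha_i in [0, 1] and the primal vector
    # w = (1 / (lam * n)) * sum_i alpha_i * y_i * x_i.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # closed-form maximization of the dual objective over coordinate i
            margin_gap = 1.0 - y[i] * X[i].dot(w)
            delta = margin_gap * lam * n / (X[i].dot(X[i]) + 1e-12)
            new_alpha = np.clip(alpha[i] + delta, 0.0, 1.0)
            w += (new_alpha - alpha[i]) * y[i] * X[i] / (lam * n)
            alpha[i] = new_alpha
    return w

# toy separable data (hypothetical example)
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = sdca_hinge(X, y)
print(np.sign(X @ w))  # all four training points should land on the correct side
```

Unlike SGD, each update here needs no step-size schedule: the coordinate maximization is exact, which is part of what makes the duality-gap analysis of SDCA clean.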

29
What cannot be learned with Bethe Approximations
  Amir Globerson (The Hebrew University of Jerusalem) We address the problem of learning the parameters in graphical models when inference is intractable. A common strategy in this case is to replace the ...
Popularity: 83

30
Old and New Algorithms for Blind Deconvolution
  Yair Weiss (The Hebrew University of Jerusalem) I will discuss blind deconvolution algorithms that have been successfully used in the field of communications for several decades and how they can be ...
Popularity: 60
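One classic communications-era blind deconvolution method of the kind this abstract alludes to is the Constant Modulus Algorithm (CMA). A minimal sketch follows; the channel, tap count, and step size are illustrative assumptions, and this is not necessarily among the algorithms covered in the talk.

```python
import numpy as np

def cma_equalize(x, taps=5, mu=0.01, R=1.0):
    # Constant Modulus Algorithm: stochastic gradient descent on the
    # dispersion cost E[(y^2 - R)^2] of the equalizer output y,
    # using no knowledge of the channel or the transmitted symbols.
    w = np.zeros(taps)
    w[taps // 2] = 1.0                        # center-spike initialization
    out = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]  # most recent sample first
        y = w.dot(window)
        out[n] = y
        w -= mu * (y * y - R) * y * window    # gradient step on the CMA cost
    return w, out

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=5000)                # BPSK source symbols
x = np.convolve(s, [1.0, 0.4], mode="full")[:len(s)]  # "unknown" FIR channel
w, y = cma_equalize(x)
# after convergence, the output's modulus dispersion should shrink
before = np.mean((x[1000:] ** 2 - 1.0) ** 2)
after = np.mean((y[4000:] ** 2 - 1.0) ** 2)
print(after < before)
```

The equalizer recovers the source only up to a sign and a delay, which is the usual inherent ambiguity of blind deconvolution, in communications and in image deblurring alike.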