Which Supervised Learning Method Works Best for What? An Empirical Comparison of Learning Methods and Metrics
Course URL: http://videolectures.net/solomon_caruana_wslmw/
Lecturer: Rich Caruana
Institution: Cornell University
Date: 2007-02-25
Language: English
Course description: Decision trees are intelligible, but do they perform well enough that you should use them? Have SVMs replaced neural nets, or are neural nets still best for regression, and SVMs best for classification? Boosting maximizes margins similarly to SVMs, but can boosting compete with SVMs? And if it does compete, is it better to boost weak models, as theory might suggest, or to boost stronger models? Bagging is simpler than boosting -- how well does bagging stack up against boosting? Breiman said Random Forests are better than bagging and as good as boosting. Was he right? And what about old friends like logistic regression, KNN, and naive Bayes? Should they be relegated to the history books, or do they still fill important niches?

In this talk we compare the performance of ten supervised learning methods on nine criteria: Accuracy, F-score, Lift, Precision/Recall Break-Even Point, Area under the ROC, Average Precision, Squared Error, Cross-Entropy, and Probability Calibration. The results show that no one learning method does it all, but some methods can be "repaired" so that they do very well across all performance metrics. In particular, we show how to obtain the best probabilities from max-margin methods such as SVMs and boosting via Platt's Method and isotonic regression. We then describe a new ensemble method that combines select models from these ten learning methods to yield much better performance. Although these ensembles perform extremely well, they are too complex for many applications. We'll describe what we're doing to try to fix that. Finally, if time permits, we'll discuss how the nine performance metrics relate to each other, and which of them you probably should (or shouldn't) use.

During this talk I'll briefly describe the learning methods and performance metrics to help make the lecture accessible to non-specialists in machine learning.
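
The calibration step the abstract mentions can be tried with off-the-shelf tools. Below is a minimal Python sketch, not Caruana's own code: it assumes scikit-learn, whose CalibratedClassifierCV wraps a max-margin classifier and fits either Platt's sigmoid ("sigmoid") or isotonic regression ("isotonic") to held-out scores; the synthetic dataset and the Brier-score check are illustrative choices.

    # Sketch: calibrated probabilities from an SVM via Platt scaling
    # ("sigmoid") or isotonic regression ("isotonic").
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import LinearSVC
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.metrics import brier_score_loss

    X, y = make_classification(n_samples=2000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for method in ("sigmoid", "isotonic"):  # "sigmoid" is Platt's Method
        clf = CalibratedClassifierCV(LinearSVC(), method=method, cv=5)
        clf.fit(X_tr, y_tr)
        p = clf.predict_proba(X_te)[:, 1]  # calibrated P(y=1)
        print(method, "Brier score:", brier_score_loss(y_te, p))

The Brier score here is just a quick calibration check; it is the same quantity as the Squared Error criterion in the talk's list of metrics.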
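The ensemble that "combines select models" matches Caruana et al.'s ensemble selection (ICML 2004): greedily add models from a trained library, with replacement, whenever they improve a validation metric. A toy sketch of that loop follows; the names library_probs (one validation-probability array per model) and y_val (0/1 label array) are hypothetical, and squared error stands in for whichever metric the ensemble is tuned to.

    import numpy as np

    def ensemble_selection(library_probs, y_val, n_steps=20):
        """Greedy forward selection with replacement: at each step add the
        model whose inclusion most reduces the squared error of the
        averaged validation probabilities."""
        chosen = []
        ensemble = np.zeros_like(y_val, dtype=float)
        for _ in range(n_steps):
            best_err, best_i = None, None
            for i, p in enumerate(library_probs):
                # Running average if model i were added to the ensemble.
                cand = (ensemble * len(chosen) + p) / (len(chosen) + 1)
                err = np.mean((cand - y_val) ** 2)
                if best_err is None or err < best_err:
                    best_err, best_i = err, i
            chosen.append(best_i)
            ensemble = (ensemble * (len(chosen) - 1)
                        + library_probs[best_i]) / len(chosen)
        return chosen, ensemble

Selecting with replacement lets a strong model be added several times, which weights it more heavily in the average; this is one reason such ensembles perform well but, as the abstract notes, end up too complex for many applications.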
Keywords: Computer science; machine learning; supervised learning
Source: VideoLectures.NET
Last reviewed: 2020-06-03 by 魏雪琼 (volunteer course editor)
Views: 47