
Scalable k‑Means Clustering via Lightweight Coresets
Course URL: http://videolectures.net/kdd2018_bachem_lightweight_coresets/
Lecturer: Olivier Bachem
Institution: ETH Zurich
Date: 2018-11-23
Language: English
Course description: Coresets are compact representations of data sets such that models trained on a coreset are provably competitive with models trained on the full data set. As such, they have been successfully used to scale up clustering models to massive data sets. While existing approaches generally only allow for multiplicative approximation errors, we propose a novel notion of lightweight coresets that allows for both multiplicative and additive errors. We provide a single algorithm to construct lightweight coresets for k-means clustering as well as soft and hard Bregman clustering. The algorithm is substantially faster than existing constructions, embarrassingly parallel, and the resulting coresets are smaller. We further show that the proposed approach naturally generalizes to statistical k-means clustering and that, compared to existing results, it can be used to compute smaller summaries for empirical risk minimization. In extensive experiments, we demonstrate that the proposed algorithm outperforms existing data summarization strategies in practice.
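The lightweight coreset construction described in the abstract can be sketched in a few lines: sample points with probability proportional to a mixture of the uniform distribution and the squared distance to the data mean, then reweight by the inverse sampling probability so the weighted coreset is an unbiased summary. This is a minimal NumPy sketch, not the authors' reference implementation; the function name and the 50/50 mixture weights follow the paper's presentation, but edge cases (e.g., a data set where all points coincide with the mean) are not handled.

```python
import numpy as np

def lightweight_coreset(X, m, rng=None):
    """Sketch of lightweight coreset construction for k-means.

    Samples m points from X with probability proportional to a 50/50
    mixture of the uniform distribution and the squared distance to
    the data mean, and returns inverse-probability weights so that
    weighted costs on the coreset are unbiased estimates of costs on
    the full data set.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    mu = X.mean(axis=0)                      # mean of the full data set
    sq_dist = ((X - mu) ** 2).sum(axis=1)    # squared distance of each point to the mean
    # Sampling distribution: mixture of uniform and distance-proportional terms.
    q = 0.5 / n + 0.5 * sq_dist / sq_dist.sum()
    idx = rng.choice(n, size=m, p=q)         # importance sampling (embarrassingly parallel)
    weights = 1.0 / (m * q[idx])             # inverse-probability weights
    return X[idx], weights
```

The resulting `(points, weights)` pair can be fed to any weighted k-means solver (e.g., scikit-learn's `KMeans.fit(C, sample_weight=w)`); since the construction only needs the data mean and one pass of distances, it is substantially cheaper than importance-sampling schemes that require a bicriteria solution first.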
Keywords: clustering models; risk minimization; data summarization
Source: VideoLectures.NET
Data collected: 2022-12-19: chenjy
Last reviewed: 2023-05-11: chenjy
Views: 22