Learning Deep Network Representations with Adversarially Regularized Autoencoders
Course URL: http://videolectures.net/kdd2018_yu_deep_network_representations/
Lecturer: Wenchao Yu
Institution: University of California, Los Angeles
Date: 2018-11-23
Language: English
Abstract: The problem of network representation learning, also known as network embedding, arises in many machine learning tasks under the assumption that a small number of factors of variation in the vertex representations can capture the "semantics" of the original network structure. Most existing network embedding models, whether shallow or deep, learn vertex representations from sampled vertex sequences such that the low-dimensional embeddings preserve locality and/or global reconstruction capability. The resulting representations, however, generalize poorly because the sequences sampled from the input network are intrinsically sparse. An ideal approach, then, is to generate vertex representations by learning a probability density function over the sampled sequences; in many cases, though, such a distribution on a low-dimensional manifold has no analytic form. In this study, we propose to learn network representations with adversarially regularized autoencoders (NetRA). NetRA learns smoothly regularized vertex representations that capture the network structure well by jointly enforcing locality-preserving and global reconstruction constraints. The joint inference is encapsulated in a generative adversarial training process that circumvents the need for an explicit prior distribution and thus achieves better generalization. We demonstrate empirically how well NetRA captures key properties of the network structure and how effective it is on a variety of tasks, including network reconstruction, link prediction, and multi-label classification.
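To make the abstract's objective concrete, below is a minimal training-loop sketch of an adversarially regularized autoencoder for network embedding, assuming PyTorch. All module names, dimensions, and hyperparameters are illustrative assumptions, not the authors' released code; the sequence encoder described in the talk is replaced here by a plain feed-forward encoder over vertex neighborhood vectors to keep the sketch short. The key idea survives the simplification: the encoder's codes act as "real" samples for a WGAN-style critic while a generator learns the latent prior, so no explicit analytic prior is required.

```python
# Sketch of adversarially regularized autoencoder training for network
# embedding. Assumes PyTorch; names and hyperparameters are illustrative.
import torch
import torch.nn as nn

N_VERTICES, EMB_DIM, NOISE_DIM = 1000, 64, 32

# Autoencoder over vertex neighborhood vectors (global reconstruction).
encoder = nn.Sequential(nn.Linear(N_VERTICES, 256), nn.ReLU(), nn.Linear(256, EMB_DIM))
decoder = nn.Sequential(nn.Linear(EMB_DIM, 256), nn.ReLU(), nn.Linear(256, N_VERTICES))
# Generator learns the latent prior instead of fixing an analytic one.
generator = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, EMB_DIM))
# WGAN-style critic scores latent codes: encoder codes vs. generated codes.
critic = nn.Sequential(nn.Linear(EMB_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_gen = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_crit = torch.optim.Adam(critic.parameters(), lr=1e-4)
reconstruct = nn.BCEWithLogitsLoss()

def train_step(x, pairs, lam=0.1):
    """x: (B, N_VERTICES) batch of binary neighborhood vectors;
    pairs: (P, 2) LongTensor indexing rows of x whose vertices co-occur
    in sampled walks (the locality-preserving constraint)."""
    # 1) Reconstruction + locality preservation.
    z = encoder(x)
    loss_rec = reconstruct(decoder(z), x)
    zi, zj = z[pairs[:, 0]], z[pairs[:, 1]]
    loss_loc = ((zi - zj) ** 2).sum(dim=1).mean()  # pull co-occurring vertices together
    opt_ae.zero_grad()
    (loss_rec + lam * loss_loc).backward()
    opt_ae.step()

    # 2) Critic: separate encoder codes ("real") from generated codes ("fake").
    z_real = encoder(x).detach()
    z_fake = generator(torch.randn(x.size(0), NOISE_DIM)).detach()
    loss_crit = critic(z_fake).mean() - critic(z_real).mean()
    opt_crit.zero_grad()
    loss_crit.backward()
    opt_crit.step()
    for p in critic.parameters():  # weight clipping keeps the critic Lipschitz
        p.data.clamp_(-0.01, 0.01)

    # 3) Adversarial regularization: the generator chases the encoder's code
    #    distribution while the encoder moves toward the generator's, so the
    #    two latent distributions converge without any explicit prior.
    loss_gen = -critic(generator(torch.randn(x.size(0), NOISE_DIM))).mean()
    opt_gen.zero_grad()
    loss_gen.backward()
    opt_gen.step()

    loss_enc = critic(encoder(x)).mean()
    opt_ae.zero_grad()
    loss_enc.backward()
    opt_ae.step()
```

The three-player structure mirrors the abstract: step 1 handles the locality-preserving and global reconstruction constraints jointly, while steps 2 and 3 are the generative adversarial process that regularizes the latent space in place of a hand-picked prior.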
Keywords: adversarial regularization; regularized autoencoders; deep network representation learning; machine learning tasks
Source: VideoLectures.NET
Data collected: 2023-03-27: cyh
Last reviewed: 2023-03-27: cyh