Multi-view Adversarially Learned Inference for Cross-domain Joint Distribution Matching
Course URL: http://videolectures.net/kdd2018_du_adversarially_cross-domain/
Lecturer: Changyin Du
Institution: Institute of Computing Technology, Chinese Academy of Sciences
Date: 2018-11-23
Language: English
Course description: Many important data mining problems can be modeled as learning a (bidirectional) multidimensional mapping between two data domains. Based on generative adversarial networks (GANs), particularly conditional ones, cross-domain joint distribution matching is an increasingly popular class of methods for such problems. Although significant advances have been achieved, existing models still suffer from two main disadvantages: the requirement of a large number of paired training samples and the notorious instability of training. In this paper, we propose a multi-view adversarially learned inference (ALI) model, termed MALI, to address these issues. Unlike the common practice of learning direct domain mappings, our model relies on shared latent representations of both domains and can generate an arbitrary number of paired fake samples; as a result, very few paired samples (together with sufficient unpaired ones) are usually enough to learn good mappings. Extending the vanilla ALI model, we design novel discriminators to judge the quality of generated samples (both paired and unpaired), and provide a theoretical analysis of our new formulation. Experiments on image translation, image-to-attribute, and attribute-to-image generation tasks demonstrate that our semi-supervised learning framework yields significant performance improvements over existing ones. Results on cross-modality retrieval show that our latent-space-based method can achieve competitive similarity search performance at a relatively fast speed.
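
The description above outlines the architecture only at a high level. As a rough illustration of the ALI-style idea it builds on (matching joint distributions of data and a shared latent code across two domains, rather than learning direct domain mappings), the following minimal PyTorch sketch shows one way such a setup can look; every dimension, network shape, and name in it (DIM_X, enc_x, disc_pair, the averaged latent, and so on) is an assumption made for illustration and is not taken from the paper or the lecture.

    # Minimal, illustrative sketch of an ALI-style shared-latent setup.
    # All module shapes and names are assumptions, not the authors' MALI code.
    import torch
    import torch.nn as nn

    DIM_X, DIM_Y, DIM_Z = 784, 40, 64   # e.g. image view, attribute view, shared latent code

    def mlp(d_in, d_out):
        return nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_out))

    enc_x, enc_y = mlp(DIM_X, DIM_Z), mlp(DIM_Y, DIM_Z)   # encoders: each domain -> shared latent
    dec_x, dec_y = mlp(DIM_Z, DIM_X), mlp(DIM_Z, DIM_Y)   # decoders: shared latent -> each domain
    disc_pair = nn.Sequential(mlp(DIM_X + DIM_Y + DIM_Z, 1), nn.Sigmoid())  # judges (x, y, z) tuples
    bce = nn.BCELoss()

    def discriminator_loss(x, y):
        # "Real" tuple: a genuine pair (x, y) with a latent code inferred from both views
        # (averaging the two encodings is one illustrative choice).
        z_enc = 0.5 * (enc_x(x) + enc_y(y))
        real_score = disc_pair(torch.cat([x, y, z_enc], dim=1))
        # "Fake" tuple: a pseudo pair decoded from a latent code drawn from the prior,
        # which is how arbitrarily many paired fake samples can be produced.
        z_prior = torch.randn(x.size(0), DIM_Z)
        fake_x, fake_y = dec_x(z_prior), dec_y(z_prior)
        fake_score = disc_pair(torch.cat([fake_x, fake_y, z_prior], dim=1))
        return bce(real_score, torch.ones_like(real_score)) + \
               bce(fake_score, torch.zeros_like(fake_score))

    # A full implementation would also add per-domain discriminators for unpaired data,
    # the encoder/decoder (generator) updates, and optimizers.
    x, y = torch.randn(8, DIM_X), torch.randn(8, DIM_Y)
    print(discriminator_loss(x, y).item())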
Keywords: data mining problems; mapping between two data domains; paired training samples; unpaired samples
Source: VideoLectures.NET
Data collected: 2023-01-30 (cyh)
Last reviewed: 2023-01-31 (cyh)
Views: 24