Adversarial Attacks on Neural Networks for Graph Data
Course URL: http://videolectures.net/kdd2018_zuegner_adversarial_networks/
Lecturer: Daniel Zügner
Institution: Technical University of Munich
Date: 2018-11-23
Language: English
Course description: Deep learning models for graphs have achieved strong performance on the task of node classification. Despite their proliferation, there is currently no study of their robustness to adversarial attacks. Yet in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this work, we introduce the first study of adversarial attacks on attributed graphs, focusing specifically on models that exploit ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which target the training phase of a machine learning model. We generate adversarial perturbations targeting the nodes' features and the graph structure, thus taking the dependencies between instances into account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain, we propose an efficient algorithm, Nettack, that exploits incremental computations. Our experimental study shows that the accuracy of node classification drops significantly even when only a few perturbations are performed. Moreover, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and are likewise successful even when only limited knowledge about the graph is available.
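To make the structure-attack idea concrete, here is a minimal, self-contained sketch in NumPy. It uses entirely synthetic data (a random graph, random features, and a random weight matrix standing in for a trained surrogate) and scores candidate single-edge flips by how much they reduce the target node's classification margin under a linearized two-layer GCN, the kind of surrogate the talk's Nettack algorithm attacks. Unlike the real algorithm, this sketch recomputes everything from scratch rather than incrementally, and it ignores the unnoticeability constraints.

```python
import numpy as np

# Toy sketch: greedy single-edge-flip attack on a linearized
# two-layer GCN surrogate, Z = A_hat @ A_hat @ X @ W.
# All data is synthetic; W stands in for pretrained surrogate weights.

rng = np.random.default_rng(0)
n, d, k = 8, 4, 2                      # nodes, feature dim, classes

A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                            # undirected, no self-loops
X = rng.random((n, d))
W = rng.standard_normal((d, k))

def margin(A, target, label):
    """Margin of `target` toward class `label` under the linear
    surrogate with symmetric normalization (self-loops added)."""
    A_tilde = A + np.eye(n)
    deg = A_tilde.sum(1)
    A_hat = A_tilde / np.sqrt(np.outer(deg, deg))  # D^-1/2 A~ D^-1/2
    z = (A_hat @ A_hat @ X @ W)[target]
    return z[label] - np.max(np.delete(z, label))

target, label = 0, 0
base_m = margin(A, target, label)
best_flip, best_m = None, base_m
for u in range(n):                     # brute-force candidate flips
    for v in range(u + 1, n):
        A2 = A.copy()
        A2[u, v] = A2[v, u] = 1 - A2[u, v]   # toggle edge (u, v)
        m = margin(A2, target, label)
        if m < best_m:
            best_m, best_flip = m, (u, v)

print("best flip:", best_flip, "margin:", base_m, "->", best_m)
```

A lower margin for the (hypothetical) true label means the surrogate is closer to misclassifying the target node; repeating the greedy step within a perturbation budget yields the test-time variant of the attack described above.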
Keywords: deep learning models; attributed graphs; adversarial perturbations; node classification models
Source: VideoLectures.NET
Data collected: 2022-11-23: chenjy
Last reviewed: 2022-11-23: chenjy
Views: 37