Deep Leakage from Gradients (nips.cc) (NeurIPS 2019)

A Method of Information Protection for Collaborative Deep Learning under GAN Model Attack

By placing instrumentation points ("buried points") in the network to detect a generative adversarial network (GAN) attack, and then adjusting the training parameters accordingly, training based on the GAN model attack is rendered ineffective and the information is protected.
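The detect-then-adjust idea above could be sketched roughly as follows. This is a minimal illustration, not the paper's actual mechanism: the z-score rule on update norms, the threshold, and the drop-from-aggregation response are all assumptions standing in for the unspecified "buried point" detector and parameter adjustment.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_and_mitigate(client_updates, history, z_thresh=3.0):
    """Hypothetical 'buried point' check: flag clients whose update
    norms deviate strongly from historical statistics, then exclude
    them from aggregation so GAN-style training on their side stops
    improving. The z-score rule is an illustrative choice only."""
    norms = np.array([np.linalg.norm(u) for u in client_updates])
    mu, sigma = history.mean(), history.std() + 1e-8
    z = (norms - mu) / sigma
    weights = np.where(np.abs(z) > z_thresh, 0.0, 1.0)  # exclude suspects
    total = weights.sum()
    weights = weights / total if total > 0 else weights
    aggregated = sum(w * u for w, u in zip(weights, client_updates))
    return aggregated, z

# Historical norms of honest updates (same distribution as below).
history = np.array([np.linalg.norm(rng.normal(0, 0.1, 10))
                    for _ in range(100)])
updates = [rng.normal(0, 0.1, 10) for _ in range(4)]  # honest clients
updates.append(rng.normal(0, 5.0, 10))                # anomalous attacker
aggregated, z = detect_and_mitigate(updates, history)
print(np.abs(z[-1]))  # attacker's z-score far exceeds the threshold
```

In this toy setup the attacker's outsized update is flagged and zero-weighted, so it no longer influences the aggregated model.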

GANobfuscator: Mitigating Information Leakage Under GAN via Differential Privacy

Summary of the key points:

  1. Existing work:
    1. GAN attack in the classic federated learning setting to steal training data (2017)
    2. Attacks strengthened with the data stolen by the GAN attack:
      1. Follow-up poisoning attacks
      2. Enhanced membership inference attacks
    3. GAN applied on the server side; multi-task GAN
  2. On using other generative models for data theft: in terms of operating principle, a GAN pits its generator network against the federated learning global model, which serves as the discriminator network, and this adversarial game is what trains the GAN. Other generative models, mainly autoencoders (AEs) and their derivatives, instead fit normal-distribution parameters to the original data and generate samples from the distribution those parameters define; so far no comparable point of fit with federated learning has been found.
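The GAN-attack principle in point 2 — the attacker's generator trained against the global model acting as the discriminator — can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: a two-class softmax classifier plays the "global model", an affine map plays the generator, and the learning rate and dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the FL global model: a fixed two-class softmax
# classifier the attacker can query; it plays the discriminator.
W = rng.normal(0, 1, (2, 5))

def global_model(x):
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Attacker's generator: affine map from 3-d noise to 5-d "samples".
G = rng.normal(0, 0.1, (5, 3))
b = np.zeros(5)
target = 0   # class whose private data the attacker tries to mimic
lr = 0.5

for _ in range(200):
    z = rng.normal(0, 1, 3)
    x = G @ z + b
    p = global_model(x)
    # d(log p[target])/dx for a softmax-linear model:
    grad_x = W[target] - p @ W
    G += lr * np.outer(grad_x, z)  # ascend the target-class confidence
    b += lr * grad_x

# Mean target-class confidence of freshly generated samples.
conf = np.mean([global_model(G @ rng.normal(0, 1, 3) + b)[target]
                for _ in range(50)])
print(conf)
```

The generator never sees the training data; it only queries the "global model", yet its outputs drift toward the region that model labels as the target class — the essence of the 2017 attack noted above.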

[1907.02189] On the Convergence of FedAvg on Non-IID Data (arxiv.org)


GAN-Based Information Leakage Attack Detection in Federated Learning (hindawi.com)