1. Main line of related work
    1. Deep Models Under the GAN. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS '17).
      1. Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning. IEEE INFOCOM 2019.
      2. Deep Leakage from Gradients. NeurIPS 2019.
      3. GANobfuscator: Mitigating Information Leakage Under GAN via Differential Privacy. IEEE Transactions on Information Forensics and Security, 2019.
      4. A Method of Information Protection for Collaborative Deep Learning under GAN Model Attack. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2019. By planting monitoring points in the network to detect a generative adversarial network (GAN) attack and adjusting the training parameters, training based on the GAN model attack is forced to become invalid, so the information is effectively protected.
  2. Thesis work
    1. Reproduced the CCS17 attack.

    2. Attempted to propose an evaluation metric for the attack results, a reconstruction error, but it did not work well in practice:

      $$ d = \min_{g \in G} \sum_{x=1}^{N_x} \sum_{y=1}^{N_y} \lvert f(x,y) - g(x,y) \rvert $$
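The metric above can be sketched as follows, assuming the reconstructed image $f$ and the reference samples $g \in G$ are grayscale images of shape $(N_x, N_y)$ stored as NumPy arrays; the function name is illustrative, not from the original notes.

```python
import numpy as np

def reconstruction_error(f, reference_set):
    """d = min over g in G of the summed pixel-wise absolute
    difference between the reconstruction f and each sample g."""
    return min(float(np.abs(f - g).sum()) for g in reference_set)

# toy usage with hypothetical 4x4 "images"
rng = np.random.default_rng(0)
target = rng.random((4, 4))
G = [rng.random((4, 4)) for _ in range(5)] + [target.copy()]
print(reconstruction_error(target, G))  # 0.0, since target itself is in G
```

A perfect reconstruction of any sample in $G$ yields $d = 0$; larger values mean the attack output is far from every real sample.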

    3. For the noise problem observed in the CCS17 GAN attack, a possible cause is identified: the malicious user's local task conflicts with the global task, so the two optimization directions are inconsistent, which introduces noise.
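One way to make the "inconsistent optimization directions" hypothesis concrete is to measure the cosine similarity between the malicious user's local update and the aggregated global update; a negative value indicates the two tasks pull the model in opposing directions. This is a hypothetical diagnostic, not part of the CCS17 attack itself.

```python
import numpy as np

def update_alignment(local_grad, global_grad):
    """Cosine similarity between two flattened update vectors.
    Values near 1 mean aligned tasks; negative values mean the
    local and global objectives conflict."""
    num = float(np.dot(local_grad, global_grad))
    den = float(np.linalg.norm(local_grad) * np.linalg.norm(global_grad))
    return num / den

g_local = np.array([1.0, -2.0, 0.5])
g_global = -g_local  # exactly opposite direction
print(update_alignment(g_local, g_global))  # negative: directions conflict
```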

    4. An implementation problem remains: after differential privacy is added, model accuracy drops, and the drop is not mitigated as the number of training rounds increases.
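The accuracy drop is consistent with how gradient-level differential privacy is usually applied (DP-SGD style, per Abadi et al.): each update is clipped and then perturbed with Gaussian noise whose scale is fixed by the clipping bound, so the same amount of noise is injected every round regardless of how far training has progressed. A minimal sketch, assuming this clip-then-noise mechanism; the function and parameter names are illustrative.

```python
import numpy as np

def dp_perturb(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a flattened gradient to L2 norm `clip_norm`, then add
    Gaussian noise proportional to the clipping bound. The noise
    scale does not decay with the round number, which is one reason
    the accuracy loss does not shrink over training."""
    rng = rng or np.random.default_rng()
    norm = float(np.linalg.norm(grad))
    clipped = grad / max(1.0, norm / clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

rng = np.random.default_rng(0)
g = np.ones(10) * 5.0                      # raw gradient, L2 norm ~15.8
noisy = dp_perturb(g, clip_norm=1.0, noise_multiplier=1.0, rng=rng)
```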