Presenter: 김누리
Presentation date: 2022-02-22
Authors: Boaz Lerner, Guy Shiran, and Daphna Weinshall
Venue: arXiv preprint arXiv:2012.00504 (2020)
Recently, Semi-Supervised Learning (SSL) has shown much promise in leveraging unlabeled data while being provided with very few labels. In this paper, we show that intermittently ignoring the labels altogether for whole epochs during training can significantly improve performance in the small-sample regime. More specifically, we propose to train a network on two tasks jointly. The primary classification task is exposed to both the unlabeled and the scarcely annotated data, whereas the secondary task seeks to cluster the data without any labels. As opposed to the hand-crafted pretext tasks frequently used in self-supervision, our clustering phase utilizes the same classification network and head in an attempt to relax the primary task and propagate the information from the labels without overfitting them. On top of that, the self-supervised technique of classifying image rotations is incorporated during the unsupervised learning phase to stabilize training. We demonstrate our method's efficacy in boosting several state-of-the-art SSL algorithms, significantly improving their results and reducing running time on various standard semi-supervised benchmarks, including 92.6% accuracy on CIFAR-10 and 96.9% on SVHN, using only 4 labels per class in each task. We also notably improve the results in the extreme cases of 1, 2, and 3 labels per class, and show that features learned by our model are more meaningful for separating the data.
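The two ingredients the abstract describes can be sketched in plain NumPy: the 4-way rotation pretext task used during the label-free phase, and an alternating epoch schedule that intermittently drops the labels. This is a minimal illustration, not the paper's implementation; the function names and the `cluster_every` parameter are hypothetical.

```python
import numpy as np

def make_rotation_batch(images):
    """Rotation pretext task: each image is rotated by 0/90/180/270
    degrees, and the rotation index (0-3) becomes its pseudo-label."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

def epoch_schedule(n_epochs, cluster_every=2):
    """Hypothetical schedule: every `cluster_every`-th epoch ignores the
    labels and trains the clustering/rotation tasks instead; the same
    network and head are used in both phases, per the paper."""
    return ["cluster" if e % cluster_every == 1 else "classify"
            for e in range(n_epochs)]
```

For example, a batch of 2 images yields 8 rotated views with labels `[0, 1, 2, 3, 0, 1, 2, 3]`, and `epoch_schedule(4)` alternates `classify`/`cluster` epochs.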
