FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling
Presenter: 김누리
Presentation date: 2021-11-23
Authors: Zhang, Bowen, et al.
Conference: Advances in Neural Information Processing Systems 34 (2021)
The recently proposed FixMatch achieved state-of-the-art results on most semi-supervised learning (SSL) benchmarks. However, like other modern SSL algorithms, FixMatch uses a pre-defined constant threshold for all classes to select unlabeled data that contribute to the training, thus failing to consider different learning status and learning difficulties of different classes. To address this issue, we propose Curriculum Pseudo Labeling (CPL), a curriculum learning approach to leverage unlabeled data according to the model's learning status. The core of CPL is to flexibly adjust thresholds for different classes at each time step to let pass informative unlabeled data and their pseudo labels. CPL does not introduce additional parameters or computations (forward or backward propagation). We apply CPL to FixMatch and call our improved algorithm FlexMatch. FlexMatch achieves state-of-the-art performance on a variety of SSL benchmarks, with especially strong performances when the labeled data are extremely limited or when the task is challenging. For example, FlexMatch outperforms FixMatch by 14.32% and 24.55% on CIFAR-100 and STL-10 datasets respectively, when there are only 4 labels per class. CPL also significantly boosts the convergence speed, e.g., FlexMatch can use only 1/5 training time of FixMatch to achieve even better performance. Furthermore, we show that CPL can be easily adapted to other SSL algorithms and remarkably improve their performances. We open-source our code at https://github.com/TorchSSL/TorchSSL.
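
For reference, below is a minimal PyTorch sketch of the class-wise dynamic thresholding idea described in the abstract: each class's threshold is scaled down in proportion to how poorly that class is currently learned, so harder classes admit more pseudo labels. The function names, the counting-based learning-status estimate, and the normalization are illustrative assumptions, not the official TorchSSL implementation.

import torch

def curriculum_thresholds(probs, base_threshold=0.95):
    # probs: (N, C) softmax predictions on unlabeled data.
    # Returns a (C,) tensor of per-class confidence thresholds.
    conf, pred = probs.max(dim=1)  # confidence and predicted class per sample
    num_classes = probs.size(1)
    # Estimate each class's "learning status" as the number of unlabeled samples
    # already confidently predicted as that class.
    sigma = torch.zeros(num_classes, device=probs.device)
    for c in range(num_classes):
        sigma[c] = ((pred == c) & (conf >= base_threshold)).sum()
    # Normalize by the best-learned class: well-learned classes keep a high
    # threshold, hard classes get a lower one and let more pseudo labels pass.
    beta = sigma / sigma.max().clamp(min=1.0)
    return beta * base_threshold

def select_pseudo_labels(probs, thresholds):
    # Keep samples whose confidence exceeds the threshold of their predicted class.
    conf, pred = probs.max(dim=1)
    mask = conf >= thresholds[pred]
    return pred, mask

# Toy usage: random predictions for 64 unlabeled samples over 10 classes.
if __name__ == "__main__":
    probs = torch.softmax(torch.randn(64, 10), dim=1)
    thresholds = curriculum_thresholds(probs)
    pseudo_labels, mask = select_pseudo_labels(probs, thresholds)
    print(thresholds, mask.float().mean())

Because the thresholds are derived only from predictions the model already makes, this scheme adds no extra parameters or forward/backward passes, which matches the claim in the abstract.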

    2021

      Revisiting Skeleton-based Action Recognition
      2021.12.21
      Presenter: 배현재     Presentation date: 2021-12-21     Authors: Haodong Duan, Yue Zhao, Kai Chen, Dian Shao, Dahua Lin, Bo Dai     Conference: CVPR 2021
      Proximal Policy Optimization Algorithms
      2021.12.15
      Presenter: 길창배     Presentation date: 2021-12-15     Authors: John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov     Journal: arXiv 2017
      Deep Leakage from Gradients
      2021.11.09
      Presenter: 홍만수     Presentation date: 2021-11-09     Authors: Ligeng Zhu, Zhijian Liu, Song Han     Conference: NeurIPS 2019