Title: CoMatch: Semi-Supervised Learning with Contrastive Graph Regularization
Presenter: 김누리
Presentation date: 2021-05-13
Authors: Li, Junnan, Caiming Xiong, and Steven Hoi
Venue: arXiv preprint arXiv:2011.11183 (2020)

Semi-supervised learning has been an effective paradigm for leveraging unlabeled data to reduce the reliance on labeled data. We propose CoMatch, a new semi-supervised learning method that unifies dominant approaches and addresses their limitations. CoMatch jointly learns two representations of the training data, their class probabilities and low-dimensional embeddings. The two representations interact with each other to jointly evolve. The embeddings impose a smoothness constraint on the class probabilities to improve the pseudo-labels, whereas the pseudo-labels regularize the structure of the embeddings through graph-based contrastive learning. CoMatch achieves state-of-the-art performance on multiple datasets. It achieves substantial accuracy improvements on the label-scarce CIFAR-10 and STL-10. On ImageNet with 1% labels, CoMatch achieves a top-1 accuracy of 66.0%, outperforming FixMatch [34] by 12.6%. Furthermore, CoMatch achieves better representation learning performance on downstream tasks, outperforming both supervised learning and self-supervised learning. Code and pre-trained models are available at https://github.com/salesforce/CoMatch/

[Figure: CoMatch, Fig. 2]
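The abstract describes two interacting components: the embedding graph smooths the class probabilities into better pseudo-labels, and the pseudo-label graph in turn supervises contrastive learning over the embeddings. The sketch below is a rough PyTorch illustration of how these two pieces might look, assuming L2-normalized embeddings and a memory bank of recent embeddings and class probabilities; the function names, variable names, and hyperparameter values are placeholders, not the authors' released code.

```python
# Minimal sketch of CoMatch's two interacting components (assumes PyTorch).
# Names and hyperparameter values are illustrative only; see
# https://github.com/salesforce/CoMatch/ for the authors' implementation.
import torch
import torch.nn.functional as F


def smooth_pseudo_labels(probs_w, emb_w, mem_emb, mem_probs, alpha=0.9, t=0.1):
    """Refine classifier probabilities of weakly augmented unlabeled images by
    aggregating the predictions of nearby samples in an embedding memory bank
    (the smoothness constraint the embeddings impose on the class probabilities)."""
    sim = torch.exp(emb_w @ mem_emb.t() / t)        # affinity to memory-bank embeddings
    w = sim / sim.sum(dim=1, keepdim=True)          # normalized graph edge weights
    return alpha * probs_w + (1 - alpha) * (w @ mem_probs)


def graph_contrastive_loss(emb_s1, emb_s2, pseudo, thresh=0.8, t=0.1):
    """Pseudo-label graph supervises the embedding graph: two strongly augmented
    views get similar embeddings iff their smoothed pseudo-labels agree."""
    # Target graph: pseudo-label similarity, weak edges pruned, self-loops kept.
    wq = pseudo @ pseudo.t()
    wq = torch.where(wq >= thresh, wq, torch.zeros_like(wq))
    wq.fill_diagonal_(1.0)
    wq = wq / wq.sum(dim=1, keepdim=True)
    # Embedding graph built from the two strongly augmented views.
    logits = emb_s1 @ emb_s2.t() / t
    log_wz = F.log_softmax(logits, dim=1)
    return -(wq * log_wz).sum(dim=1).mean()         # cross-entropy between the two graphs


if __name__ == "__main__":
    B, C, D, M = 8, 10, 64, 256                     # batch, classes, embedding dim, memory size
    probs_w = torch.softmax(torch.randn(B, C), dim=1)
    emb_w = F.normalize(torch.randn(B, D), dim=1)
    mem_emb = F.normalize(torch.randn(M, D), dim=1)
    mem_probs = torch.softmax(torch.randn(M, C), dim=1)
    pseudo = smooth_pseudo_labels(probs_w, emb_w, mem_emb, mem_probs)
    emb_s1 = F.normalize(torch.randn(B, D), dim=1)
    emb_s2 = F.normalize(torch.randn(B, D), dim=1)
    print(graph_contrastive_loss(emb_s1, emb_s2, pseudo))
```

In the paper the smoothed pseudo-labels are additionally filtered by a confidence threshold before entering the classification loss (as in FixMatch); the linked repository contains the full training loop and the exact hyperparameters.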
