Presenter: 김누리
Presentation date: 2022-10-05
Authors: Wang, Xudong, et al.
Venue: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
This work studies the bias issue of pseudo-labeling, a natural phenomenon that occurs widely but is often overlooked by prior research. Pseudo-labels are generated when a classifier trained on source data is transferred to unlabeled target data. We observe heavily long-tailed pseudo-labels when the semi-supervised learning model FixMatch predicts labels on the unlabeled set, even though the unlabeled data is curated to be balanced. Without intervention, the training model inherits the bias from the pseudo-labels and ends up being sub-optimal. To eliminate the model bias, we propose a simple yet effective method, DebiasMatch, comprising an adaptive debiasing module and an adaptive marginal loss. The strength of debiasing and the size of the margins are automatically adjusted using an online-updated queue. Benchmarked on ImageNet-1K, DebiasMatch significantly outperforms the previous state of the art by more than 26% and 10.5% on semi-supervised learning (0.2% annotated data) and zero-shot learning tasks, respectively.
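The core idea, adjusting pseudo-label logits against a running estimate of the model's predicted class distribution before thresholding, can be sketched as follows. This is a minimal illustration of debiased pseudo-labeling via logit adjustment, not the paper's exact implementation; the function name, the `tau` debiasing strength, and the confidence `threshold` are illustrative assumptions.

```python
import numpy as np

def debiased_pseudo_labels(logits, queue_probs, tau=0.4, threshold=0.95):
    """Sketch of debiased pseudo-labeling (illustrative, not the paper's code).

    logits:      (N, C) classifier outputs on unlabeled samples
    queue_probs: (C,) running mean of predicted class probabilities,
                 maintained as an online-updated queue over recent batches
    tau:         debiasing strength (hypothetical fixed value here;
                 DebiasMatch adapts it automatically)
    """
    # Subtract the log of the estimated (biased) class prior from the
    # logits, counteracting the model's tendency toward head classes.
    adjusted = logits - tau * np.log(queue_probs + 1e-8)

    # Numerically stable softmax over the adjusted logits.
    exp = np.exp(adjusted - adjusted.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)

    # Keep only confident predictions as pseudo-labels (FixMatch-style mask).
    labels = probs.argmax(axis=1)
    mask = probs.max(axis=1) >= threshold
    return labels, mask
```

In a training loop, `queue_probs` would be refreshed each step from the model's recent predictions, so the debiasing term tracks the current bias rather than a fixed prior.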

    2023

      Segment Anything
      2023.06.21
      Presenter: 송경렬     Presentation date: 2023-06-21     Authors: Alexander Kirillov (1,2,4), Eric Mintun (2), Nikhila Ravi (1,2), Hanzi Mao (2), Chloe Rolland (3), Laura Gustafson (3), Tete Xiao (3), Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár (4), Ross Girshick (4); (1) project lead, (2) joint first author, (3) equal contribution, (4) directional lead     Venue: arXiv 2023
      Does Knowledge Distillation Really Work?
      2023.06.21
      Presenter: 안재한     Presentation date: 2023-06-21     Authors: Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, Andrew Gordon Wilson     Venue: NeurIPS 2021