Presenter: 김누리
Presentation date: 2021-03-10
Authors: Reza Esfandiarpoor, Mohsen Hajabdollahi, Stephen H. Bach
Conference:
Journal:

[Figure: Our masking model pipeline]


Abstract. In many practical few-shot learning problems, even though labeled examples are scarce, there are abundant auxiliary datasets that potentially contain useful information. We propose a framework to address the challenges of efficiently selecting and effectively using auxiliary data in image classification. Given an auxiliary dataset and a notion of semantic similarity among classes, we automatically select pseudo shots, which are labeled examples from other classes related to the target task. We show that naively assuming these additional examples come from the same distribution as the target task examples does not significantly improve accuracy. Instead, we propose a masking module that adjusts the features of auxiliary data to be more similar to those of the target classes. We show that this masking module can improve accuracy by up to 18 percentage points, particularly when the auxiliary data is semantically distant from the target task. We also show that incorporating pseudo shots improves on the current state-of-the-art few-shot image classification accuracy by an average of 4.81 percentage points on 1-shot tasks and 0.31 percentage points on 5-shot tasks.
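To make the idea of the masking module concrete, below is a minimal PyTorch sketch, not the authors' implementation. It assumes the module is a small MLP that, conditioned on an auxiliary (pseudo-shot) feature and a target-class prototype, outputs an elementwise sigmoid gate over the feature; the names MaskingModule, feature_dim, and hidden_dim are placeholders chosen for illustration, and the real architecture in the paper may differ.

```python
# Hypothetical sketch of a feature-masking module for pseudo shots.
import torch
import torch.nn as nn


class MaskingModule(nn.Module):
    """Adapts auxiliary (pseudo-shot) features toward the target classes
    by predicting an elementwise mask in [0, 1]."""

    def __init__(self, feature_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, feature_dim),
            nn.Sigmoid(),  # mask values in [0, 1]
        )

    def forward(self, aux_feat: torch.Tensor, target_proto: torch.Tensor) -> torch.Tensor:
        # Condition the mask on both the auxiliary feature and the
        # prototype (e.g., mean embedding) of the target class it supports.
        mask = self.net(torch.cat([aux_feat, target_proto], dim=-1))
        return mask * aux_feat  # masked auxiliary feature


# Toy usage: 64 selected pseudo shots with 640-dim backbone features.
feats = torch.randn(64, 640)                 # auxiliary features
proto = torch.randn(1, 640).expand(64, -1)   # target-class prototype
adapted = MaskingModule(640)(feats, proto)
print(adapted.shape)  # torch.Size([64, 640])
```

The masked features can then be mixed with the few genuine target-class examples when fitting the classifier, which is how the abstract describes the auxiliary data being used without assuming it shares the target distribution.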
