Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications
Presenter: 김누리
Presentation date: 2021-10-05
Authors: Nicholas Carlini, Ulfar Erlingsson, and Nicolas Papernot
Venue: arXiv preprint arXiv:1910.13427 (2019)
We develop techniques to quantify the degree to which a given (training or testing) example is an outlier in the underlying distribution. We evaluate five methods to score examples in a dataset by how well-represented the examples are, for different plausible definitions of "well-represented", and apply these to four common datasets: MNIST, Fashion-MNIST, CIFAR-10, and ImageNet. Despite being independent approaches, we find all five are highly correlated, suggesting that the notion of being well-represented can be quantified. Among other uses, we find these methods can be combined to identify (a) prototypical examples (that match human expectations); (b) memorized training examples; and, (c) uncommon submodes of the dataset. Further, we show how we can utilize our metrics to determine an improved ordering for curriculum learning, and impact adversarial robustness. We release all metric values on training and test sets we studied.
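To make the idea of scoring examples by how well-represented they are concrete, here is a minimal toy sketch of one plausible density-style score: a point's mean distance to its k nearest neighbors, where small values indicate a dense, well-represented region and large values indicate an outlier. This is only an illustrative proxy on synthetic 2-D data, not one of the five metrics evaluated in the paper.

```python
import numpy as np

def knn_outlier_score(X, k=5):
    """Score each row of X by its mean distance to its k nearest
    neighbors. Small score = dense region (well-represented);
    large score = outlier-like. Toy density proxy only."""
    # Pairwise Euclidean distances between all rows.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distance
    nearest = np.sort(d, axis=1)[:, :k]  # k smallest distances per row
    return nearest.mean(axis=1)

# Toy data: a tight cluster of 50 points plus one far-away outlier.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
               np.array([[10.0, 10.0]])])
scores = knn_outlier_score(X, k=5)
# The injected outlier (last row) receives the largest score.
```

Under this kind of score, low-scoring points behave like the paper's "prototypical" examples and high-scoring points like its uncommon submodes or memorized outliers.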
