Poster

Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels

Pengfei Chen · Ben Liao · Guangyong Chen · Shengyu Zhang

Pacific Ballroom #154

Keywords: [ Supervised Learning ] [ Robust Statistics and Machine Learning ] [ Computer Vision ] [ Algorithms ]


Abstract:

Noisy labels are ubiquitous in real-world datasets, which poses a challenge for robustly training deep neural networks (DNNs), as DNNs usually have enough capacity to memorize the noisy labels. In this paper, we find that the test accuracy can be quantitatively characterized in terms of the noise ratio of the dataset. In particular, the test accuracy is a quadratic function of the noise ratio in the case of symmetric noise, which explains previously published experimental findings. Based on our analysis, we apply cross-validation on random splits of the noisy dataset to identify most of the correctly labeled samples. We then adopt the Co-teaching strategy, which takes full advantage of the identified samples, to train DNNs robustly against noisy labels. Compared with a wide range of state-of-the-art methods, our strategy consistently improves the generalization performance of DNNs under both synthetic and real-world training noise.
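
The selection step described in the abstract can be illustrated with a short sketch. The code below is only a minimal, hypothetical illustration of the cross-validation idea: train a classifier on one random half of a noisy dataset and keep samples in the other half whose given labels agree with the model's predictions. The logistic-regression stand-in, the helper name select_clean_samples, and the toy data are assumptions for illustration, not the authors' implementation (which trains DNNs and feeds the selected samples into Co-teaching).

# Minimal sketch (not the authors' code): identify likely-clean samples by
# training on one random half of a noisy dataset and keeping samples in the
# other half whose noisy labels match the trained model's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in for a DNN
from sklearn.datasets import make_blobs

def select_clean_samples(features, noisy_labels, seed=0):
    """Return indices of samples whose given labels agree with a model
    trained on the other random half of the data."""
    rng = np.random.default_rng(seed)
    n = len(noisy_labels)
    perm = rng.permutation(n)
    half_a, half_b = perm[: n // 2], perm[n // 2 :]

    selected = []
    # Train on each half and check agreement on the other half,
    # so every sample gets a chance to be selected.
    for train_idx, eval_idx in [(half_a, half_b), (half_b, half_a)]:
        model = LogisticRegression(max_iter=1000)
        model.fit(features[train_idx], noisy_labels[train_idx])
        preds = model.predict(features[eval_idx])
        agree = eval_idx[preds == noisy_labels[eval_idx]]
        selected.extend(agree.tolist())
    return np.array(sorted(selected))

if __name__ == "__main__":
    # Toy data: two well-separated classes with 20% symmetric label noise.
    X, y = make_blobs(n_samples=1000, centers=2, random_state=0)
    rng = np.random.default_rng(0)
    noisy_y = y.copy()
    flip = rng.random(len(y)) < 0.2
    noisy_y[flip] = 1 - noisy_y[flip]  # symmetric flip for two classes

    clean_idx = select_clean_samples(X, noisy_y)
    frac_correct = np.mean(noisy_y[clean_idx] == y[clean_idx])
    print(f"selected {len(clean_idx)} samples, "
          f"{frac_correct:.2%} of which are correctly labeled")

On such toy data the selected subset is much cleaner than the full noisy set, which is the property the paper exploits when passing the identified samples to Co-teaching.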
