

Poster

Does Label Smoothing Help Deep Partial Label Learning?

Xiuwen Gong · Nitin Bisht · Guandong Xu


Abstract:

Although deep partial label learning (deep PLL) classifiers have shown competitive performance, they are heavily affected by noisy false-positive labels, which degrades performance as training progresses. Meanwhile, existing deep PLL research lacks theoretical guarantees on the correlation between label noise (or ambiguity degree) and classification performance. This paper addresses the above limitations with label smoothing (LS) from both theoretical and empirical perspectives. In theory, we prove lower and upper bounds on the expected risk to show that label smoothing can help deep PLL. We further derive the optimal smoothing rate to investigate the conditions under which label smoothing benefits deep PLL. Meanwhile, we pioneer the theoretical guarantee on the correlation between the classification performance (determined by the optimal smoothing rate) and label noise (quantified by the generalized ambiguity degree). We also provide an estimation error bound on the empirical risk to further validate the effectiveness of label smoothing for deep PLL. In practice, we design a benchmark solution and a novel optimization algorithm called Label Smoothing-based Partial Label Learning (LS-PLL). Extensive experimental results on benchmark PLL datasets and various deep architectures validate that label smoothing does help deep PLL in improving classification performance and learning distinguishable representations, and that the best results are achieved when the empirical smoothing rate approaches the optimal smoothing rate from our theoretical findings. Code is publicly available at https://github.com/kalpiree/LS-PLL.
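To make the idea concrete, below is a minimal sketch of how label smoothing might be applied to partial-label targets: each instance's probability mass is spread uniformly over its candidate label set and then smoothed toward a uniform distribution over all classes with rate r. This is only an illustration under these assumptions, not the LS-PLL algorithm from the paper; the function names `smoothed_partial_targets` and `soft_cross_entropy` are hypothetical helpers, not part of the released code.

```python
import torch
import torch.nn.functional as F


def smoothed_partial_targets(candidate_mask: torch.Tensor, rate: float) -> torch.Tensor:
    """Build soft targets from partial-label candidate sets.

    candidate_mask: (N, K) binary tensor, 1 for candidate labels of each instance.
    rate: smoothing rate r in [0, 1]; r = 0 keeps all mass on the candidate set.
    Spreads (1 - r) of the mass uniformly over each instance's candidates
    and r uniformly over all K classes.
    """
    candidate_mask = candidate_mask.float()
    num_classes = candidate_mask.size(1)
    on_candidates = candidate_mask / candidate_mask.sum(dim=1, keepdim=True)
    uniform = torch.full_like(candidate_mask, 1.0 / num_classes)
    return (1.0 - rate) * on_candidates + rate * uniform


def soft_cross_entropy(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Cross-entropy of model logits against the soft (smoothed) targets."""
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()


# Example: 4 classes, one instance with candidate set {0, 2}, smoothing rate 0.1
mask = torch.tensor([[1, 0, 1, 0]])
targets = smoothed_partial_targets(mask, rate=0.1)  # ≈ [0.475, 0.025, 0.475, 0.025]
```

In this sketch the smoothing rate plays the role of the paper's tunable rate: setting it near the theoretically optimal value would correspond to the regime in which the abstract reports the best empirical results.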
