Regularly Truncated M-Estimators for Learning With Noisy Labels

Xiaobo Xia, Pengqian Lu, Chen Gong, Bo Han, Jun Yu*, Jun Yu, Tongliang Liu

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

1 Citation (Scopus)

Abstract

The sample selection approach is very popular in learning with noisy labels. As deep networks "learn pattern first", prior methods built on sample selection share a similar training procedure: small-loss examples are regarded as clean and used to help generalization, while large-loss examples are treated as mislabeled and excluded from network parameter updates. However, such a procedure is debatable in two respects: (a) it does not account for the harmful influence of the noisy labels that remain among the selected small-loss examples; (b) it does not make good use of the discarded large-loss examples, which may be clean or carry meaningful information for generalization. In this paper, we propose regularly truncated M-estimators (RTME) to address both issues simultaneously. Specifically, RTME alternately switches between truncated M-estimators and original M-estimators. The former adaptively selects small-loss examples without knowing the noise rate and reduces the side effects of the noisy labels among them. The latter allows possibly clean but large-loss examples to be involved in helping generalization. Theoretically, we demonstrate that our strategies are label-noise-tolerant. Empirically, comprehensive experimental results show that our method outperforms multiple baselines and is robust to a broad range of noise types and levels. The implementation is available at https://github.com/xiaoboxia/RTM_LNL.
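To make the alternating scheme concrete, below is a minimal PyTorch sketch of the idea, not the paper's implementation (see the repository above for that). The threshold tau, the switching period switch_period, and the helper names truncated_ce_loss and rtme_step are all illustrative assumptions; in particular, the paper's truncated M-estimator selects small-loss examples adaptively without knowing the noise rate, whereas this sketch uses a fixed threshold.

    import torch
    import torch.nn.functional as F

    def truncated_ce_loss(logits, targets, tau):
        # Truncated M-estimator (illustrative): per-example cross-entropy
        # losses above the threshold tau are masked out, so large-loss
        # (likely mislabeled) examples contribute no gradient.
        losses = F.cross_entropy(logits, targets, reduction="none")
        keep = (losses <= tau).float().detach()  # selection itself is not differentiated
        return (keep * losses).sum() / keep.sum().clamp(min=1.0)

    def rtme_step(model, optimizer, x, y, epoch, tau, switch_period=2):
        # Regularly switch between the truncated M-estimator (robust mode)
        # and the original M-estimator (full loss, so possibly clean but
        # large-loss examples also drive the parameter update).
        logits = model(x)
        if epoch % switch_period == 0:
            loss = truncated_ce_loss(logits, y, tau)
        else:
            loss = F.cross_entropy(logits, y)  # original M-estimator over all examples
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Under this toy scheme, truncated epochs limit the influence of noisy labels in the selected small-loss set, while full-loss epochs let the discarded large-loss examples re-enter training, mirroring the two goals stated in the abstract.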

Original language: English
Pages (from-to): 3522-3536
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 46
Issue number: 5
DOIs
Publication status: Published - 28 Dec 2023

Scopus Subject Areas

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics

User-Defined Keywords

  • Australia
  • Computer science
  • Generalization
  • learning with noisy labels
  • Noise measurement
  • Random variables
  • regularly truncated m-estimators
  • sample selection
  • Switches
  • Training
  • Training data
  • truncated m-estimators
