Abstract
The <italic>sample selection</italic> approach is very popular in learning with noisy labels. As deep networks <italic>“learn patterns first”</italic>, prior methods built on sample selection share a similar training procedure: small-loss examples are regarded as clean and used to help generalization, while large-loss examples are treated as mislabeled and excluded from network parameter updates. However, such a procedure is <italic>debatable</italic> on two fronts: (a) it does not account for the harmful influence of noisy labels among the selected small-loss examples; (b) it does not make good use of the discarded large-loss examples, which may be clean or carry information that is meaningful for generalization. In this paper, we propose regularly truncated M-estimators (RTME) to address these two issues <italic>simultaneously</italic>. Specifically, RTME <italic>alternately switches between truncated M-estimators and original M-estimators</italic>. The former can <italic>adaptively</italic> select small-loss examples without knowing the noise rate and reduce the side effects of the noisy labels among them. The latter brings large-loss but possibly clean examples back into training to help generalization. Theoretically, we demonstrate that our strategies are label-noise-tolerant. Empirically, comprehensive experimental results show that our method outperforms multiple baselines and is robust to a broad range of noise types and levels. The implementation is available at <uri>https://github.com/xiaoboxia/RTM_LNL</uri>.
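To make the alternation concrete, here is a minimal pure-Python sketch of the idea the abstract describes: a truncated estimator that averages only the smallest per-example losses (selected by a kept fraction rather than a known noise rate), alternated epoch-by-epoch with the ordinary mean so that large-loss but possibly clean examples still contribute. The function names, the `keep_ratio` parameter, and the fixed alternation period are illustrative assumptions, not the paper's exact formulation.

```python
def truncated_mean_loss(losses, keep_ratio=0.7):
    """Truncated M-estimator sketch: average only the smallest losses.

    Examples beyond the kept fraction are treated as likely mislabeled
    and excluded, so only a kept fraction (not the true noise rate)
    needs to be chosen.  `keep_ratio` is an assumed hyperparameter.
    """
    n_keep = max(1, int(len(losses) * keep_ratio))
    kept = sorted(losses)[:n_keep]          # small-loss ("clean") examples
    return sum(kept) / len(kept)


def epoch_loss(losses, epoch, period=2, keep_ratio=0.7):
    """Regular alternation: truncated loss on some epochs, the original
    (full-mean) loss on others, so large-loss but possibly clean
    examples are still involved in parameter updates."""
    if epoch % period == 0:
        return truncated_mean_loss(losses, keep_ratio)
    return sum(losses) / len(losses)        # original M-estimator (mean)
```

In a training loop, `losses` would be the per-example losses of a mini-batch (e.g., cross-entropy with no reduction), and the returned value would be the quantity to backpropagate for that epoch.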
| Original language | English |
|---|---|
| Pages (from-to) | 3522-3536 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Volume | 46 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 28 Dec 2023 |
User-Defined Keywords
- Australia
- Computer science
- Generalization
- learning with noisy labels
- Noise measurement
- Random variables
- regularly truncated M-estimators
- sample selection
- Switches
- Training
- Training data
- truncated M-estimators
Fingerprint
Research topics of 'Regularly Truncated M-Estimators for Learning With Noisy Labels'.