TY - JOUR
T1 - FN-NET: Adaptive data augmentation network for fine-grained visual categorization
AU - Ye, Shuo
AU - Peng, Qinmu
AU - Cheung, Yiu-ming
AU - Wang, Yu
AU - Zou, Ziqian
AU - You, Xinge
N1 - Funding Information:
This work was supported in part by the National Key R&D Program of China 2022YFC3301000, in part by the Fundamental Research Funds for the Central Universities, China, HUST: 2023JYCXJJ031.
Publisher copyright:
© 2025 Elsevier Ltd. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
PY - 2025/3/28
Y1 - 2025/3/28
N2 - Data augmentation significantly enhances model performance, robustness, and generalization ability. However, existing methods struggle when applied directly to fine-grained targets: under perspective changes in particular, the significant details carried by local regions may be obscured or altered, making direct augmentation prone to severe overfitting. We argue that subclasses share common discriminative features, and that these features exhibit a certain degree of complementarity. Therefore, in this paper, we propose a novel data augmentation framework for fine-grained targets called the feature expansion and noise fusion network (FN-Net). Specifically, a lightweight branch (aug-branch) is introduced in the middle layers of the convolutional neural network. This branch performs feature expansion, which creates new semantic combinations from multiple instances by exchanging discriminative regions within the same subclass in the feature space. Noise fusion preserves the noise distribution of the current subclass, enhancing the model’s robustness and improving its understanding of instances in real-world environments. Additionally, to prevent the feature expansion process from disrupting the original feature combinations, a distillation loss is employed to guide the learning of the aug-branch. We evaluate FN-Net on three FGVC benchmark datasets. The experimental results demonstrate that our method consistently outperforms state-of-the-art approaches across backbone networks of different depths and types.
AB - Data augmentation significantly enhances model performance, robustness, and generalization ability. However, existing methods struggle when applied directly to fine-grained targets: under perspective changes in particular, the significant details carried by local regions may be obscured or altered, making direct augmentation prone to severe overfitting. We argue that subclasses share common discriminative features, and that these features exhibit a certain degree of complementarity. Therefore, in this paper, we propose a novel data augmentation framework for fine-grained targets called the feature expansion and noise fusion network (FN-Net). Specifically, a lightweight branch (aug-branch) is introduced in the middle layers of the convolutional neural network. This branch performs feature expansion, which creates new semantic combinations from multiple instances by exchanging discriminative regions within the same subclass in the feature space. Noise fusion preserves the noise distribution of the current subclass, enhancing the model’s robustness and improving its understanding of instances in real-world environments. Additionally, to prevent the feature expansion process from disrupting the original feature combinations, a distillation loss is employed to guide the learning of the aug-branch. We evaluate FN-Net on three FGVC benchmark datasets. The experimental results demonstrate that our method consistently outperforms state-of-the-art approaches across backbone networks of different depths and types.
KW - Data augmentation
KW - Fine-grained visual categorization
KW - Knowledge distillation
UR - http://www.scopus.com/inward/record.url?scp=105001829654&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2025.111618
DO - 10.1016/j.patcog.2025.111618
M3 - Journal article
SN - 0031-3203
VL - 165
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 111618
ER -