Presentation Information
[4Yin-A-49]Applying Batch Normalization Re-training in Transfer Learning to Class-Incremental Learning
〇Chika Obata1, Sora Togawa1, Kenya Jinno1 (1. Tokyo City University)
Keywords:
Continual Learning, Batch Normalization, Catastrophic Forgetting
Deep neural networks pre-trained on large-scale datasets exhibit strong transfer performance; however, in continual learning settings where classes are incrementally introduced, catastrophic forgetting inevitably arises. In this study, we investigate class-incremental learning using an ImageNet-pretrained ResNet-18 as a feature extractor, focusing on the re-training of Batch Normalization (BN) layers and classifier updates. The primary objective of this work is not domain adaptation from ImageNet to CIFAR-10, but rather adaptation to feature distribution shifts caused by incremental class additions within the same dataset.
We compare three training strategies: Head-only (updating only the classifier), BN+Head (updating the classifier and the affine parameters γ and β of BN layers), and Full FT (updating all parameters). Experimental results on CIFAR-10 show that BN+Head achieves performance comparable to or better than Full FT, while demonstrating reduced forgetting compared to both Head-only and Full FT. These findings suggest that updating BN layers enables adaptation to new classes through minimal distributional adjustments while maintaining consistency with fixed old classifiers.
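The three strategies differ only in which parameters receive gradients. A minimal PyTorch sketch of the parameter-freezing logic is given below; the function name `apply_strategy`, the `head_name` argument, and the toy usage are illustrative assumptions, not part of the paper (the study uses an ImageNet-pretrained ResNet-18, where the head attribute would be `fc`).

```python
import torch.nn as nn


def apply_strategy(model: nn.Module, head_name: str, strategy: str) -> nn.Module:
    """Set requires_grad flags for the three strategies compared in the abstract.

    strategy: "head" (Head-only), "bn+head" (BN+Head), or "full" (Full FT).
    head_name: attribute name of the classifier module, e.g. "fc" for ResNet-18.
    Hypothetical helper for illustration only.
    """
    if strategy == "full":
        # Full FT: all parameters are trainable.
        for p in model.parameters():
            p.requires_grad = True
        return model

    # Freeze the whole feature extractor first.
    for p in model.parameters():
        p.requires_grad = False

    # The classifier head is updated under every strategy.
    for p in getattr(model, head_name).parameters():
        p.requires_grad = True

    if strategy == "bn+head":
        # BN+Head: additionally re-enable the BN affine parameters
        # (gamma = weight, beta = bias).
        for m in model.modules():
            if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
                for p in m.parameters():
                    p.requires_grad = True
    return model
```

Note that even with `requires_grad = False`, BN running statistics still update whenever the model is in `train()` mode; a full implementation would also decide whether those buffers should track the new-class distribution.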
