
Evaluation of pre-training impact on fine-tuning for remote sensing scene classification

Yuan, Man, Liu, Zhi, Wang, Fan
Remote Sensing Letters 2019 v.10 no.1 pp. 49-58
data collection, models, remote sensing
Fine-tuning a Deep Convolutional Neural Network (DCNN) model obtained through complicated and tedious pre-training on large-scale datasets has become a standard baseline for tasks with scale-limited datasets. However, little work has been done to investigate pre-training's impact on fine-tuning. In this letter, we systematically verify that the advantage conferred by pre-training on large-scale datasets from other domains remains insurmountable for full-training with mainstream datasets in the Remote Sensing Scene Classification (RSSC) domain. However, we confirm that this impact is task-related: once the need for pre-training is satisfied, excessive pre-training may degrade the generalization of the subsequent fine-tuning. This counter-intuitive finding explains why identical pre-training yields different generalization disparities across tasks. We also provide experimental evidence that excessive or noisy pre-training can make fine-tuning perform worse than full-training in some RSSC tasks. Notably, possible factors, including dataset, initialization, optimization, and domain gap, are taken into consideration, and further enlightening conclusions are drawn from this evaluation.
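The fine-tuning-versus-full-training comparison the letter evaluates can be illustrated with a deliberately tiny toy model. The sketch below is purely hypothetical and is not the paper's experimental setup: it "pre-trains" a one-feature logistic-regression classifier on a large source-domain set, then either warm-starts from those weights (fine-tuning) or starts from scratch (full-training) on a small, shifted target set standing in for the domain gap.

```python
# Hypothetical toy illustration of fine-tuning vs. full-training
# (stand-in for the DCNN/RSSC setting; all names and numbers are invented).
import math
import random

random.seed(0)


def make_data(n, w_true, b_true):
    """Sample (x, y) pairs from a logistic model with true parameters."""
    data = []
    for _ in range(n):
        x = random.uniform(-2, 2)
        p = 1 / (1 + math.exp(-(w_true * x + b_true)))
        data.append((x, 1 if random.random() < p else 0))
    return data


def train(data, w, b, lr=0.1, epochs=10):
    """Plain SGD on the log-loss, starting from the given (w, b)."""
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))
            g = p - y  # gradient of log-loss w.r.t. the logit
            w -= lr * g * x
            b -= lr * g
    return w, b


def accuracy(data, w, b):
    hits = sum(((w * x + b) >= 0) == (y == 1) for x, y in data)
    return hits / len(data)


# Large source-domain set; small, shifted target-domain set.
source = make_data(2000, w_true=3.0, b_true=0.0)
target = make_data(40, w_true=2.5, b_true=0.3)
test = make_data(500, w_true=2.5, b_true=0.3)

# Pre-train on the source, then fine-tune on the target (warm start).
w_pre, b_pre = train(source, 0.0, 0.0, epochs=5)
w_ft, b_ft = train(target, w_pre, b_pre)

# Full-training: same target data, cold (zero) initialization.
w_full, b_full = train(target, 0.0, 0.0)

print("fine-tuned accuracy:  ", accuracy(test, w_ft, b_ft))
print("full-trained accuracy:", accuracy(test, w_full, b_full))
```

Which regime wins here depends on the sizes, the shift between source and target, and the amount of "pre-training", mirroring the letter's point that pre-training's benefit is task-related rather than universal.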