Title
Comparison of Structure Reduction, Pruning, and Knowledge Distillation for Lightening of Deep Learning
DOI
https://doi.org/10.5370/KIEE.2021.70.12.1934
Keywords
Deep learning; Structure reduction; Pruning; Knowledge distillation; CIFAR10/100; ResNet56/110
Abstract
We compare three approaches to lightening a deep learning network: structure reduction, pruning, and knowledge distillation. Structure reduction eliminates a set of layers from the model, whereas pruning deletes filters within a layer. Knowledge distillation trains a small student model from a large teacher model using the KL divergence, and therefore has a similar model-reduction effect. These three lightening methods are rarely compared with one another in terms of performance. To compare them on the network reduction problem, we investigate the accuracy and FLOPs of each method on the CIFAR10 and CIFAR100 datasets with ResNet models. A systematic analysis of the fundamental orientations and differences of each method is also provided.
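For reference, knowledge distillation as described above is commonly implemented with a loss of the following form; this is the standard formulation of Hinton et al., and the exact loss, temperature, and mixing weight used in the paper are not specified here, so the symbols below are illustrative assumptions:

$$\mathcal{L} = (1-\alpha)\,\mathcal{L}_{\mathrm{CE}}\big(y,\ \sigma(z_s)\big) + \alpha\, T^{2}\,\mathrm{KL}\big(\sigma(z_t/T)\ \big\|\ \sigma(z_s/T)\big),$$

where $z_s$ and $z_t$ are the student and teacher logits, $\sigma$ is the softmax, $T$ is a temperature that softens the distributions, and $\alpha$ balances the hard-label cross-entropy against the KL term that transfers the teacher's knowledge to the smaller student.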