Inventor(s)

HP INC

Abstract

In this article, we propose AdaptDL, a co-optimization framework that simultaneously reduces the training and inference overhead of convolutional neural networks (CNNs). AdaptDL consists of two novel components: 1) multi-dimensional model compression (MMC) and 2) resolution-adaptive training (RAT). Taking both training and inference efficiency as optimization goals, MMC models the compression of the three dimensions of CNNs (depth, width, and resolution) as a multi-objective optimization problem and efficiently solves for the optimal compression strategy with evolutionary algorithms. Subsequently, RAT further improves training efficiency by introducing a progressively growing training resolution. Experiments on CIFAR-10 and ImageNet-1K validate the superiority of AdaptDL over other state-of-the-art approaches.
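As an illustration of the resolution-adaptive training (RAT) idea described in the abstract, the sketch below trains a toy CNN whose input resolution grows in stages as training progresses. The model architecture, the resolution schedule, and the synthetic batch are illustrative assumptions for demonstration only, not AdaptDL's actual configuration.

```python
# Minimal sketch of a progressively growing training resolution (assumed
# schedule and toy model; not the published AdaptDL implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(              # toy CNN standing in for the compressed network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Hypothetical schedule: (starting epoch, training resolution).
schedule = [(0, 96), (30, 160), (60, 224)]

def resolution_for(epoch):
    """Return the resolution of the latest schedule stage reached."""
    res = schedule[0][1]
    for start, r in schedule:
        if epoch >= start:
            res = r
    return res

for epoch in range(90):
    res = resolution_for(epoch)
    # Placeholder batch; a real data loader would supply images and labels.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 10, (8,))
    # Downscale inputs to the current training resolution before the forward pass.
    images = F.interpolate(images, size=(res, res), mode="bilinear",
                           align_corners=False)
    loss = F.cross_entropy(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Early epochs run at low resolution and are therefore cheap; later epochs use the full resolution so the network adapts to the input size used at inference.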

Creative Commons License

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 License.
