Vincent Gripon's Homepage

Research and Teaching Blog

Compression of Deep Neural Networks on the Fly

G. Soulié, V. Gripon and M. Robert, "Compression of Deep Neural Networks on the Fly," in Lecture Notes in Computer Science, Volume 9887, pp. 153--170, September 2016.

Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve the best results, they rely on millions of trainable parameters. However, when targeting embedded applications, the size of these models becomes problematic. As a consequence, their use on smartphones and other resource-limited devices is impractical. In this paper we introduce a novel compression method for deep neural networks that is performed during the learning phase. It consists in adding an extra regularization term to the cost function of fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for further savings in storage consumption. We evaluate our method on two data sets (MNIST and CIFAR10), on which we achieve significantly larger compression rates than state-of-the-art methods.
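
As an illustration of the Product Quantization step mentioned in the abstract, below is a minimal Python/NumPy sketch. It is not the authors' code: the function names, sub-vector count and codebook size are illustrative assumptions, and the paper's extra regularization term (applied during training) is not reproduced here.

# Hypothetical sketch: product quantization (PQ) of a trained fully-connected
# weight matrix. All names and parameters are illustrative, not the paper's.
import numpy as np

def product_quantize(W, num_subvectors=4, num_centroids=16, iters=20, seed=0):
    # Split each row of W into sub-vectors and k-means-quantize each subspace.
    # Storing the codebooks plus per-row centroid indices replaces the full
    # floating-point matrix, which is where the storage savings come from.
    rng = np.random.default_rng(seed)
    rows, cols = W.shape
    assert cols % num_subvectors == 0, "columns must split evenly"
    d = cols // num_subvectors
    codebooks, codes = [], []
    for s in range(num_subvectors):
        sub = W[:, s * d:(s + 1) * d]                      # (rows, d) sub-vectors
        centers = sub[rng.choice(rows, num_centroids, replace=False)]
        for _ in range(iters):                             # plain k-means
            dist = ((sub[:, None, :] - centers[None]) ** 2).sum(-1)
            assign = dist.argmin(1)
            for k in range(num_centroids):
                members = sub[assign == k]
                if len(members):
                    centers[k] = members.mean(0)
        codebooks.append(centers)
        codes.append(assign)
    return codebooks, np.stack(codes, axis=1)

def reconstruct(codebooks, codes):
    # Rebuild an approximate weight matrix from codebooks and codes.
    return np.concatenate([cb[codes[:, s]] for s, cb in enumerate(codebooks)], axis=1)

# Example: quantize a random 256x128 "trained" weight matrix.
W = np.random.randn(256, 128).astype(np.float32)
codebooks, codes = product_quantize(W)
W_hat = reconstruct(codebooks, codes)
print("reconstruction MSE:", float(((W - W_hat) ** 2).mean()))

In the paper, this kind of quantization is applied to weights that were trained with the extra regularization term on the fully-connected layers, which is what allows the combination to reach the reported compression rates.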

Download manuscript.

Bibtex
@article{SouGriRob20169,
  author = {Guillaume Soulié and Vincent Gripon and Maëlys Robert},
  title = {Compression of Deep Neural Networks on the Fly},
  journal = {Lecture Notes in Computer Science},
  year = {2016},
  volume = {9887},
  pages = {153--170},
  month = {September},
}



