Vincent Gripon's Website

A blog about my research and teaching

Conference Papers

2022

R. Baena, L. Drumetz and V. Gripon, "Un Mixup Local pour empêcher les intrusions de variétés," in GRETSI, 2022. Manuscript.

R. Lafargue, J. Diguet, V. Gripon and B. Pasdeloup, "Classes adversaires dans l'apprentissage avec peu d'exemples," in GRETSI, 2022. Manuscript.

Y. E. Ouahidi, H. Tessier, G. Lioi, N. Farrugia, B. Pasdeloup and V. Gripon, "Élagage de réseaux de neurones convolutifs sur graphes pour la sélection de fréquences significatives pour le décodage d'IRMf," in GRETSI, 2022. Manuscript.

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, T. Hannagan and D. Bertrand, "Élagage de réseaux profond de neurones par dégradation sélective des pondérations," in GRETSI, 2022. Manuscript.

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, D. Bertrand and T. Hannagan, "Leveraging Structured Pruning of Convolutional Neural Networks," in IEEE SiPS, Rennes, France, November 2022. Manuscript.

H. Younes, H. Le Blevec, M. Léonardon and V. Gripon, "Inter-Operability of Compression Techniques for Efficient Deployment of CNNs on Microcontrollers," in SYSINT: International Conference on System-Integrated Intelligence, Genova, Italy, pp. 543--552, September 2022. Manuscript.

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, D. Bertrand and T. Hannagan, "Energy Consumption Analysis of pruned Semantic Segmentation Networks on an Embedded GPU," in SYSINT: International Conference on System-Integrated Intelligence, Genova, Italy, pp. 553--563, September 2022. Manuscript.

R. Baena, L. Drumetz and V. Gripon, "A Local Mixup to Prevent Manifold Intrusion," in EUSIPCO, pp. 1372--1376, August 2022. Manuscript.

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, D. Bertrand and T. Hannagan, "Investigating the Not-So-Obvious Effects of Structured Pruning," in ICML HAET Workshop, July 2022. Manuscript.

2021

T. Giraudon, V. Gripon, M. Löwe and F. Vermet, "Towards an Intrinsic Definition of Robustness for a Classifier," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4015--4019, 2021. Manuscript.

Y. Hu, V. Gripon and S. Pateux, "Leveraging the Feature Distribution in Transfer-based Few-Shot Learning," in International Conference on Artificial Neural Networks, pp. 487--499, September 2021. Manuscript.

C. Lassance, V. Gripon and G. Mateos, "Graph topology inference benchmarks for machine learning," in IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1--6, September 2021. Manuscript.

M. Hamidouche, C. Lassance, Y. Hu, L. Drumetz, B. Pasdeloup and V. Gripon, "Improving Classification Accuracy with Graph Filtering," in IEEE International Conference on Image Processing (ICIP), pp. 334--338, September 2021. Manuscript.

M. Bontonou, G. Lioi, N. Farrugia and V. Gripon, "Few-shot Decoding of Brain Activation Maps," in 29th European Signal Processing Conference (EUSIPCO), pp. 1326--1330, August 2021. Manuscript.

R. Baena, L. Drumetz and V. Gripon, "Inferring Graph Signal Translations as Invariant Transformations for Classification Tasks," in 29th European Signal Processing Conference (EUSIPCO), pp. 2169--2173, August 2021. Manuscript.

M. Léonardon and V. Gripon, "Using Deep Neural Networks to Predict and Improve the Performance of Polar Codes," in 11th International Symposium on Topics in Coding (ISTC), pp. 1--5, August 2021. Manuscript.

G. B. Hacene, C. Lassance, V. Gripon, M. Courbariaux and Y. Bengio, "Attention Based Pruning for Shift Networks," in 25th International Conference on Pattern Recognition (ICPR), pp. 4054--4061, January 2021. Manuscript.

Y. Hu, V. Gripon and S. Pateux, "Graph-based Interpolation of Feature Vectors for Accurate Few-Shot Classification," in 25th International Conference on Pattern Recognition (ICPR), pp. 8164--8171, January 2021. Manuscript.

2020

G. B. Hacene, V. Gripon, M. Arzel, N. Farrugia and Y. Bengio, "Quantized Guided Pruning for Efficient Hardware Implementations of Convolutional Neural Networks," in 18th IEEE International New Circuits and Systems Conference (NEWCAS), pp. 206--209, 2020. Manuscript.

M. Nikolić, G. B. Hacene, C. Bannon, A. D. Lascorz, M. Courbariaux, Y. Bengio, V. Gripon and A. Moshovos, "BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization," in ArXiv Preprint, 2020. Manuscript.

C. Lassance, L. Béthune, M. Bontonou, M. Hamidouche and V. Gripon, "Ranking Deep Learning Generalization using Label Variation in Latent Geometry Graphs," in ArXiv Preprint, 2020. Manuscript.

L. Khacef, V. Gripon and B. Miramond, "GPU-based Self-Organizing Maps for Post-Labeled Few-Shot Unsupervised Learning," in International Conference on Neural Information Processing, pp. 404--416, August 2020. Manuscript.

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Efficient Hardware Implementation of Incremental Learning and Inference on Chip," in 17th IEEE International New Circuits and Systems Conference (NEWCAS), pp. 206--209, June 2020.

C. Lassance, M. Bontonou, G. B. Hacene, V. Gripon, J. Tang and A. Ortega, "Deep geometric knowledge distillation with graphs," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8484--8488, May 2020.

2019

Q. Jodelet, V. Gripon and M. Hagiwara, "Transfer Learning with Sparse Associative Memories," in International Conference on Artificial Neural Networks, pp. 497--512, 2019. Manuscript.

G. B. Hacene, F. Leduc-Primeau, A. B. Soussia, V. Gripon and F. Gagnon, "Robustesse des réseaux de neurones profonds aux défaillances mémoire," in GRETSI, August 2019. Manuscript.

M. Bontonou, C. Lassance, J. Vialatte and V. Gripon, "Un modèle unifié pour la classification de signaux sur graphe avec de l’apprentissage profond," in GRETSI, August 2019. Manuscript.

C. Lassance, V. Gripon, J. Tang and A. Ortega, "Robustesse structurelle des architectures d’apprentissage profond," in GRETSI, August 2019. Manuscript.

M. Bontonou, C. Lassance, V. Gripon and N. Farrugia, "Comparing linear structure-based and data-driven latent spatial representations for sequence prediction," in Wavelets and Sparsity XVIII, San Diego, USA, August 2019. Manuscript.

C. Lassance, V. Gripon, J. Tang and A. Ortega, "Structural Robustness for Deep Learning Architectures," in Data Science Workshop, pp. 125--129, June 2019. Manuscript.

M. Bontonou, C. Lassance, G. B. Hacene, V. Gripon, J. Tang and A. Ortega, "Introducing Graph Smoothness Loss for Training Deep Learning Architectures," in Data Science Workshop, pp. 160--164, June 2019. Manuscript.

G. B. Hacene, F. Leduc-Primeau, A. B. Soussia, V. Gripon and F. Gagnon, "Training modern deep neural networks for memory-fault robustness," in Proceedings of the IEEE International Symposium on Circuits and Systems, pp. 1--5, May 2019.

M. Bontonou, C. Lassance, J. Vialatte and V. Gripon, "A Unified Deep Learning Formalism For Processing Graph Signals," in SDM Special Session on Graph Neural Networks, May 2019. Manuscript.

2018

N. Grelier, C. R. K. Lassance, E. Dupraz and V. Gripon, "Graph-Projected Signal Processing," in IEEE GlobalSIP, pp. 763--767, 2018. Manuscript.

C. E. R. K. Lassance, J. Vialatte and V. Gripon, "Matching Convolutional Neural Networks without Priors about Data," in Proceedings of Data Science Workshop, pp. 234--238, 2018. Manuscript.

V. Gripon, G. B. Hacene, M. Löwe and F. Vermet, "Improving Accuracy of Nonparametric Transfer Learning Via Vector Segmentation," in Proceedings of ICASSP, pp. 2966--2970, April 2018. Manuscript.

V. Gripon, A. Ortega and B. Girault, "An Inside Look at Deep Neural Networks using Graph Signal Processing," in Proceedings of ITA, pp. 1--9, February 2018. Manuscript.

2017

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Budget Restricted Incremental Learning with Pre-Trained Convolutional Neural Networks and Binary Associative Memories," in Proceedings of SiPS, pp. 1063--1073, 2017. Manuscript.

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Incremental Learning on Chip," in Proceedings of GlobalSIP, pp. 789--792, 2017. Manuscript.

M. Ménoret, N. Farrugia, B. Pasdeloup and V. Gripon, "Evaluating Graph Signal Processing for Neuroimaging Through Classification and Dimensionality Reduction," in Proceedings of GlobalSIP, pp. 618--622, 2017. Manuscript.

J. Vialatte, V. Gripon and G. Coppin, "Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs," in Proceedings of GlobalSIP, pp. 623--627, 2017. Manuscript.

V. Gripon, "Tropical Graph Signal Processing," in Proceedings of the Asilomar conference, pp. 50--54, October 2017. Manuscript.

E. Coyac, V. Gripon, C. Langlais and C. Berrou, "Robust Associative Memories Naturally Occuring From Recurrent Hebbian Networks Under Noise," in ArXiv Preprint, September 2017. Manuscript.

E. Coyac, V. Gripon, C. Langlais and C. Berrou, "Performance of Neural Clique Networks Subject to Synaptic Noise," in Proceedings of Cognitive, pp. 4--9, February 2017. Manuscript.

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Finding All Matches in a Database using Binary Neural Networks," in Proceedings of Cognitive, pp. 59--64, February 2017. Manuscript.

T. Stérin, N. Farrugia and V. Gripon, "An Intrinsic Difference Between Vanilla RNNs and GRU Models," in Proceedings of Cognitive, pp. 76--81, February 2017. Manuscript.

2016

N. Grelier, B. Pasdeloup, J. Vialatte and V. Gripon, "Neighborhood-Preserving Translations on Graphs," in Proceedings of GlobalSIP, pp. 410--414, October 2016.

E. Coyac, V. Gripon, C. Langlais and C. Berrou, "Distributed Coding and Synaptic Pruning," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 206--210, September 2016. Manuscript.

A. Aboudib, V. Gripon and G. Coppin, "A Turbo-Inspired Iterative Approach for Correspondence Problems of Image Features," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 226--230, September 2016. Manuscript.

P. Tigréat, C. R. K. Lassance, X. Jiang, V. Gripon and C. Berrou, "Assembly Output Codes for Learning Neural Networks," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 285--289, September 2016. Manuscript.

D. Ferro, V. Gripon and X. Jiang, "Nearest Neighbour Search Using Binary Neural Networks," in Proceedings of IJCNN, pp. 5106--5112, July 2016. Manuscript.

2015

A. Mheich, M. Hassan, F. Wendling, M. Khalil, O. Dufor, V. Gripon and C. Berrou, "SimNet: A new algorithm for measuring brain networks similarity," in Proceedings of the ICABME international conference, pp. 119--122, 2015. Manuscript.

B. Pasdeloup, V. Gripon, G. Mercier and D. Pastor, "Vers une caractérisation de la courbe d'incertitude pour des graphes portant des signaux," in Proceedings of the GRETSI conference, 2015. Manuscript.

R. Danilo, V. Gripon, P. Coussy and L. Conde-Canencia, "Réseaux de Clusters de Neurones Restreints," in Proceedings of the GRETSI conference, 2015. Manuscript.

E. Coyac, V. Gripon and C. Langlais, "Impact du bruit synaptique sur les performances des réseaux de cliques neurales," in Proceedings of the GRETSI conference, 2015. Manuscript.

B. Pasdeloup, M. Rabbat, V. Gripon, D. Pastor and G. Mercier, "Graph Reconstruction from the Observation of Diffused Signals," in Proceedings of the 53rd Allerton Conference, pp. 1386--1390, October 2015. Manuscript.

B. Pasdeloup, R. Alami, V. Gripon and M. Rabbat, "Toward an uncertainty principle for weighted graphs," in Proceedings of the 23rd European Signal Processing Conference, pp. 1496--1500, July 2015.

R. Danilo, H. Jarollahi, V. Gripon, P. Coussy, L. Conde-Canencia and W. J. Gross, "Algorithm and Implementation of an Associative Memory for Oriented Edge Detection Using Improved Clustered Neural Networks," in Proceedings of the ISCAS Conference, pp. 2501--2504, May 2015. Manuscript.

R. Danilo, V. Gripon, P. Coussy, L. Conde-Canencia and W. J. Gross, "Restricted Clustered Neural Network for Storing Real Data," in Proceedings of the GLSVLSI conference, pp. 205--210, May 2015. Manuscript.

A. Mheich, M. Hassan, V. Gripon, O. Dufor, M. Khalil, C. Berrou and F. Wendling, "A novel algorithm for measuring graph similarity: application to brain networks," in Proceedings of the IEEE EMBS Neural Engineering Conference, pp. 1068--1071, April 2015. Manuscript.

A. Aboudib, V. Gripon and G. Coppin, "A Model of Bottom-Up Visual Attention Using Cortical Magnification," in Proceedings of ICASSP, pp. 1493--1497, April 2015. Manuscript.

E. S. Gooya, D. Pastor and V. Gripon, "Automatic face recognition using SIFT and networks of tagged neural cliques," in Proceedings of Cognitive, pp. 57--61, March 2015. Manuscript.

S. Larroque, E. S. Gooya, V. Gripon and D. Pastor, "Using Tags to Improve Diversity of Sparse Associative Memories," in Proceedings of Cognitive, pp. 1--7, March 2015. Manuscript.

C. Yu, V. Gripon, X. Jiang and H. Jégou, "Neural Associative Memories as Accelerators for Binary Vector Search," in Proceedings of Cognitive, pp. 85--89, March 2015. Manuscript.

A. Aboudib, V. Gripon and B. Tessiau, "Implementing Relational-Algebraic Operators for Improving Cognitive Abilities in Networks of Neural Cliques," in Proceedings of Cognitive, pp. 36--41, March 2015. Manuscript.

2014

H. Jarollahi, N. Onizawa, V. Gripon, T. Hanyu and W. J. Gross, "Algorithm and Architecture for a Multiple-Field Context-Driven Search Engine Using Fully-Parallel Clustered Associative Memories," in Proceedings of SiPS, pp. 1--6, October 2014. Manuscript.

V. Gripon, V. Skachek and M. Rabbat, "Sparse Binary Matrices as Efficient Associative Memories," in Proceedings of the 52nd Allerton Conference, pp. 499--504, October 2014. Manuscript.

C. Berrou, O. Dufor, V. Gripon and X. Jiang, "Information, Noise, Coding, Modulation: What about the Brain?," in Proceedings of the 8th Symposium on Turbo Codes and Iterative Information Processing, pp. 167--172, August 2014. Manuscript.

B. Boguslawski, V. Gripon, F. Seguin and F. Heitzmann, "Huffman Coding for Storing Non-uniformly Distributed Messages in Networks of Neural Cliques," in Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, volume 1, pp. 262--268, July 2014. Manuscript.

Z. Yao, V. Gripon and M. Rabbat, "A GPU-based Associative Memory using Sparse Neural Networks," in Proceedings of the PCNN-14 conference, pp. 688--692, July 2014. Manuscript.

F. Leduc-Primeau, V. Gripon, M. Rabbat and W. Gross, "Cluster-based Associative Memories Built From Unreliable Storage," in ICASSP, pp. 8370--8374, May 2014. Manuscript.

M. Rabbat and V. Gripon, "Towards a Spectral Characterization of Signals Supported on Small-World Networks," in ICASSP, pp. 4793--4797, May 2014. Manuscript.

A. Aboudib, V. Gripon and X. Jiang, "A study of retrieval algorithms of sparse messages in networks of neural cliques," in Proceedings of Cognitive 2014, pp. 140--146, May 2014. Manuscript.

2013

V. Gripon, V. Skachek and M. G. Rabbat, "Sparse Structured Associative Memories as Efficient Set-Membership Data Structures," in Proceedings of the 51st Allerton Conference, pp. 500--505, October 2013. Manuscript.

V. Gripon and M. Rabbat, "Maximum Likelihood Associative Memories," in Proceedings of Information Theory Workshop, pp. 1--5, September 2013. Manuscript.

V. Gripon and X. Jiang, "Mémoires associatives pour observations floues," in Proceedings of the XXIVth GRETSI colloquium, September 2013. Manuscript.

V. Gripon and M. Rabbat, "Reconstructing a Graph from Path Traces," in Proceedings of International Symposium on Information Theory, pp. 2488--2492, July 2013. Manuscript.

H. Jarollahi, V. Gripon, N. Onizawa and W. J. Gross, "A Low-Power Content-Addressable-Memory Based on Clustered-Sparse-Networks," in Proceedings of the 24th International Conference on Application-specific Systems, Architectures and Processors, pp. 642--653, June 2013. Manuscript.

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Reduced-complexity binary-weight-coded associative memories," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, pp. 2523--2527, May 2013. Manuscript.

2012

V. Gripon, M. Rabbat, V. Skachek and W. J. Gross, "Compressing multisets using tries," in Proceedings of Information Theory Workshop, Lausanne, Switzerland, pp. 647--651, September 2012. Manuscript. Presentation.

V. Gripon, V. Skachek, W. J. Gross and M. Rabbat, "Random clique codes," in Proceedings of the 7th International Symposium on Turbo Codes and Iterative Information Processing, Gothenburg, Sweden, pp. 121--125, August 2012. Manuscript. Presentation.

X. Jiang, V. Gripon and C. Berrou, "Learning long sequences in binary neural networks," in Proceedings of Cognitive 2012, Nice, France, pp. 165--170, July 2012. Manuscript.

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Architecture and Implementation of an Associative Memory Using Sparse Clustered Networks," in Proceedings of the IEEE International Symposium on Circuits and Systems, pp. 2901--2904, May 2012. Manuscript. Presentation.

V. Gripon and C. Berrou, "Nearly-optimal associative memories based on distributed constant weight codes," in Proceedings of Information Theory and Applications Workshop, San Diego, CA, USA, pp. 269--273, February 2012. Manuscript. Presentation.

2011

V. Gripon and C. Berrou, "A simple and efficient way to store many messages using neural cliques," in Proceedings of the IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain, Paris, France, pp. 54--58, April 2011. Manuscript. Presentation.

2010

C. Berrou and V. Gripon, "Coded Hopfield networks," in Proceedings of the 6th International Symposium on Turbo Codes and Iterative Information Processing, Brest, France, pp. 1--5, September 2010. Manuscript.

2009

V. Gripon and O. Serre, "Qualitative Concurrent Stochastic Games with Imperfect Information," in Proceedings of the 36th International Colloquium on Automata, Languages and Programming, Springer, Lecture Notes in Computer Science, Rhodes, Greece, pp. 200--211, July 2009. Manuscript.

Un Mixup Local pour empêcher les intrusions de variétés

R. Baena, L. Drumetz and V. Gripon, "Un Mixup Local pour empêcher les intrusions de variétés," in GRETSI, 2022.

Deployed in the context of supervised learning, Mixup is a data-dependent regularization technique that consists in linearly interpolating input samples and associated outputs. It has been shown to improve accuracy when used to train on standard machine learning datasets. However, authors have pointed out that Mixup can produce out-of-distribution virtual samples and even contradictions in the augmented training set, potentially resulting in adversarial effects. In this paper, we introduce Local Mixup in which distant input samples are weighted down when computing the loss. In constrained settings we demonstrate that Local Mixup can create a trade-off between bias and variance, with the extreme cases reducing to vanilla training and classical Mixup. Using standardized computer vision benchmarks, we also show that Local Mixup can improve accuracy.
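
To make the mechanism concrete, here is a minimal sketch (not the authors' code) of a Local Mixup loss in PyTorch: pairs are mixed as in vanilla Mixup, and each pair's contribution to the loss is down-weighted by a decreasing function of the distance between its two inputs. The exponential kernel and the bandwidth value are illustrative assumptions.

import torch
import torch.nn.functional as F

def local_mixup_loss(model, x, y, alpha=2.0, bandwidth=1.0):
    # Draw a Mixup coefficient and a random pairing of the batch
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x2, y2 = x[perm], y[perm]
    logits = model(lam * x + (1 - lam) * x2)
    # Down-weight pairs of distant inputs: this is the "local" part,
    # meant to keep virtual samples from intruding other manifolds
    dist = (x - x2).flatten(1).norm(dim=1)
    weight = torch.exp(-dist / bandwidth)
    loss = lam * F.cross_entropy(logits, y, reduction='none') \
         + (1 - lam) * F.cross_entropy(logits, y2, reduction='none')
    return (weight * loss).mean()

As the bandwidth grows all weights tend to 1 and the loss reduces to classical Mixup, which matches the trade-off described in the abstract.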

Download the manuscript.

Bibtex
@inproceedings{BaeDruGri2022,
  author = {Raphael Baena and Lucas Drumetz and
Vincent Gripon},
  title = {Un Mixup Local pour empêcher les
intrusions de variétés},
  booktitle = {GRETSI},
  year = {2022},
}

Classes adversaires dans l'apprentissage avec peu d'exemples

R. Lafargue, J. Diguet, V. Gripon and B. Pasdeloup, "Classes adversaires dans l'apprentissage avec peu d'exemples," in GRETSI, 2022.

Few-shot learning is a machine learning problem in which very few labeled examples are available. Often, the knowledge of a model pre-trained on large amounts of data from disjoint classes is transferred to address the problem. It is commonly accepted that the more numerous and varied these pre-training data are, the better the resulting performance. On the contrary, we show that less can do better. Through experiments and a model based on the signal-to-noise ratio, we show that a more judicious choice of these data can be made.

Download the manuscript.

Bibtex
@inproceedings{LafDigGriPas2022,
  author = {Raphael Lafargue and Jean-Philippe Diguet
and Vincent Gripon and Bastien Pasdeloup},
  title = {Classes adversaires dans l'apprentissage
avec peu d'exemples},
  booktitle = {GRETSI},
  year = {2022},
}

Élagage de réseaux de neurones convolutifs sur graphes pour la sélection de fréquences significatives pour le décodage d'IRMf

Y. E. Ouahidi, H. Tessier, G. Lioi, N. Farrugia, B. Pasdeloup and V. Gripon, "Élagage de réseaux de neurones convolutifs sur graphes pour la sélection de fréquences significatives pour le décodage d'IRMf," in GRETSI, 2022.

Graph signal processing defines tools to manipulate signals evolving on irregular domains, such as brain signals, by encompassing the spatial dependencies between regions of interest in the brain. In this work, we are interested in better understanding what are the graph frequencies that are the most useful to decode Functional Magnetic Resonance Imaging (fMRI) signals. For that, we introduce a deep learning architecture and adapt a pruning methodology to automatically identify such frequencies. Our experiments show that low graph frequencies are consistently identified as the most important for fMRI decoding, with a stronger contribution for the functional graph over the structural one.

Download the manuscript.

Bibtex
@inproceedings{OuaTesLioFarPasGri2022,
  author = {Yassine El Ouahidi and Hugo Tessier and
Giulia Lioi and Nicolas Farrugia and Bastien Pasdeloup
and Vincent Gripon },
  title = {Élagage de réseaux de neurones
convolutifs sur graphes pour la sélection de
fréquences significatives pour le décodage d'IRMf},
  booktitle = {GRETSI},
  year = {2022},
}

Élagage de réseaux profond de neurones par dégradation sélective des pondérations

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, T. Hannagan and D. Bertrand, "Élagage de réseaux profond de neurones par dégradation sélective des pondérations," in GRETSI, 2022.

Deep neural networks are the standard in machine learning. However, to achieve the best performance, they require millions of trainable parameters, resulting in computationally and memory-intensive architectures that are therefore not well suited to certain application contexts such as embedded systems. Parameter pruning during training is a frequently used methodology to reduce these costs, but it induces new problems: sudden performance collapse at high pruning rates, discontinuities between training phases, and so on. In this paper we introduce Selective Weight Decay (SWD), a method inspired by Lagrangian smoothing that allows progressive and continuous pruning during training. We show on standard datasets the ability of this method to achieve the best performances, especially at the highest pruning rates.
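
Below is a minimal sketch of the idea behind SWD, under assumptions of mine rather than the paper's exact recipe: an L2 penalty is applied only to the weights whose magnitude falls below the current pruning threshold, with a coefficient scheduled to grow during training so that pruning stays progressive and continuous.

import torch

def swd_penalty(model, pruning_ratio=0.8):
    # Magnitude threshold below which weights are candidates for removal
    all_w = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    k = max(int(pruning_ratio * all_w.numel()), 1)
    threshold = all_w.kthvalue(k).values
    # L2 penalty restricted to the weights that pruning would remove
    return sum(((p.abs() < threshold).float() * p).pow(2).sum()
               for p in model.parameters())

# Typical training step, where the coefficient a_t grows during training:
# loss = task_loss + a_t * swd_penalty(model)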

Download the manuscript.

Bibtex
@inproceedings{TesGriLéArzHanBer2022,
  author = {Hugo Tessier and Vincent Gripon and
Mathieu Léonardon and Matthieu Arzel and Thomas
Hannagan and David Bertrand},
  title = {Élagage de réseaux profond de neurones
par dégradation sélective des pondérations},
  booktitle = {GRETSI},
  year = {2022},
}

Leveraging Structured Pruning of Convolutional Neural Networks

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, D. Bertrand and T. Hannagan, "Leveraging Structured Pruning of Convolutional Neural Networks," in IEEE SiPS, Rennes, France, November 2022.

Structured pruning is a popular method to reduce the cost of convolutional neural networks, which are the state of the art in many computer vision tasks. However, depending on the architecture, pruning introduces dimensional discrepancies that prevent the actual reduction of pruned networks. To tackle this problem, we propose a method that can take any structured pruning mask and generate a network that does not encounter any of these problems and can be leveraged efficiently. We provide an accurate description of our solution and report the gains, in energy consumption and inference time, obtained by pruned convolutional neural networks on embedded hardware.

Download the manuscript.

Bibtex
@inproceedings{BerHan202211,
  author = {Hugo Tessier and Vincent Gripon and
Mathieu Leonardon and Matthieu Arzel and David
Bertrand and Thomas Hannagan},
  title = {Leveraging Structured Pruning of
Convolutional Neural Networks},
  booktitle = {IEEE SiPS},
  year = {2022},
  address = {Rennes, France},
  month = {November},
}

Inter-Operability of Compression Techniques for Efficient Deployment of CNNs on Microcontrollers

H. Younes, H. Le Blevec, M. Léonardon and V. Gripon, "Inter-Operability of Compression Techniques for Efficient Deployment of CNNs on Microcontrollers," in SYSINT: International Conference on System-Integrated Intelligence, Genova, Italy, pp. 543--552, September 2022.

Machine Learning (ML) has become state of the art for various tasks, including the classification of accelerometer data. In the world of the Internet of Things (IoT), the available low-power hardware often consists of microcontrollers. However, one of the challenges of embedding machine learning on microcontrollers is that the available memory space is very limited, and this memory is also occupied by the other software elements needed in the IoT device. The problem is then to design ML architectures with a very low memory footprint that maintain a low error rate. In this paper, a methodology is proposed towards the deployment of efficient machine learning on microcontrollers. This methodology is then used to investigate the effect of compression techniques, mainly pruning, quantization, and coding, on the memory budget. Indeed, we know that these techniques reduce the model size, but not how they interoperate to reach the best accuracy-to-memory trade-off. A Convolutional Neural Network (CNN) and a Human Activity Recognition (HAR) application have been adopted to validate the study.

Download the manuscript.

Bibtex
@inproceedings{LéGri20229,
  author = {Hamoud Younes and Hugo Le Blevec and
Mathieu Léonardon and Vincent Gripon},
  title = {Inter-Operability of Compression Techniques
for Efficient Deployment of CNNs on Microcontrollers},
  booktitle = {SYSINT: International Conference on
System-Integrated Intelligence},
  year = {2022},
  pages = {543--552},
  address = {Genova, Italy},
  month = {September},
}

Energy Consumption Analysis of pruned Semantic Segmentation Networks on an Embedded GPU

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, D. Bertrand and T. Hannagan, "Energy Consumption Analysis of pruned Semantic Segmentation Networks on an Embedded GPU," in SYSINT: International Conference on System-Integrated Intelligence, Genova, Italy, pp. 553--563, September 2022.

Deep neural networks are the state of the art in many computer vision tasks. Their deployment in the context of autonomous vehicles is of particular interest, since their limitations in terms of energy consumption prohibit the use of very large networks, which typically reach the best performance. A common method to reduce the complexity of these architectures, without sacrificing accuracy, is to rely on pruning, in which the least important portions are eliminated. There is a large literature on the subject but, interestingly, few works have measured the actual impact of pruning on energy. In this work, we are interested in measuring it in the specific context of semantic segmentation for autonomous driving, using the Cityscapes dataset. To this end, we analyze the impact of recently proposed structured pruning methods when trained architectures are deployed on a Jetson Xavier embedded GPU.

Download the manuscript.

Bibtex
@inproceedings{BerHan20229,
  author = {Hugo Tessier and Vincent Gripon and
Mathieu Léonardon and Matthieu Arzel and David
Bertrand and Thomas Hannagan},
  title = {Energy Consumption Analysis of pruned
Semantic Segmentation Networks on an Embedded GPU},
  booktitle = {SYSINT: International Conference on
System-Integrated Intelligence},
  year = {2022},
  pages = {553--563},
  address = {Genova, Italy},
  month = {September},
}

A Local Mixup to Prevent Manifold Intrusion

R. Baena, L. Drumetz and V. Gripon, "A Local Mixup to Prevent Manifold Intrusion," in EUSIPCO, pp. 1372--1376, August 2022.

Deployed in the context of supervised learning, Mixup is a data-dependent regularization technique that consists in linearly interpolating input samples and associated outputs. It has been shown to improve accuracy when used to train on standard machine learning datasets. However, authors have pointed out that Mixup can produce out-of-distribution virtual samples and even contradictions in the augmented training set, potentially resulting in adversarial effects. In this paper, we introduce Local Mixup in which distant input samples are weighted down when computing the loss. In constrained settings we demonstrate that Local Mixup can create a trade-off between bias and variance, with the extreme cases reducing to vanilla training and classical Mixup. Using standardized computer vision benchmarks, we also show that Local Mixup can improve accuracy.

Download the manuscript.

Bibtex
@inproceedings{DruGri20228,
  author = {Raphael Baena and Lucas Drumetz and
Vincent Gripon},
  title = {A Local Mixup to Prevent Manifold
Intrusion},
  booktitle = {EUSIPCO},
  year = {2022},
  pages = {1372--1376},
  month = {August},
}

Investigating the Not-So-Obvious Effects of Structured Pruning

H. Tessier, V. Gripon, M. Léonardon, M. Arzel, D. Bertrand and T. Hannagan, "Investigating the Not-So-Obvious Effects of Structured Pruning," in ICML HAET Workshop, July 2022.

Structured pruning is a popular method to reduce the cost of convolutional neural networks. However, depending on the architecture, pruning introduces dimensional discrepancies which prevent the actual reduction of pruned networks and mask their true complexity. Most papers in the literature overlook these issues. We propose a method that systematically solves them and generates an operational network. We show through experiments the gap between the theoretical pruning ratio and the actual complexity revealed by our method.

Download the manuscript.

Bibtex
@inproceedings{TesGriLéArzBerHan20227,
  author = {Hugo Tessier and Vincent Gripon and
Mathieu Léonardon and Matthieu Arzel and David
Bertrand and Thomas Hannagan},
  title = {Investigating the Not-So-Obvious Effects of
Structured Pruning},
  booktitle = {ICML HAET Workshop},
  year = {2022},
  month = {July},
}

Towards an Intrinsic Definition of Robustness for a Classifier

T. Giraudon, V. Gripon, M. Löwe and F. Vermet, "Towards an Intrinsic Definition of Robustness for a Classifier," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4015--4019, 2021.

The robustness of classifiers has become a question of paramount importance in the past few years. Indeed, it has been shown that state-of-the-art deep learning architectures can easily be fooled with imperceptible changes to their inputs. Therefore, finding good measures of robustness of a trained classifier is a key issue in the field. In this paper, we point out that averaging the radius of robustness of samples in a validation set is a statistically weak measure. We propose instead to weight the importance of samples depending on their difficulty. We motivate the proposed score by a theoretical case study using logistic regression, where we show that the proposed score is independent of the choice of the samples it is evaluated upon. We also empirically demonstrate the ability of the proposed score to measure robustness of classifiers with little dependence on the choice of samples in more complex settings, including deep convolutional neural networks and real datasets.

Download the manuscript.

Bibtex
@inproceedings{GirGriLöVer2021,
  author = {Théo Giraudon and Vincent Gripon and
Matthias Löwe and Franck Vermet},
  title = {Towards an Intrinsic Definition of
Robustness for a Classifier},
  booktitle = {IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP)},
  year = {2021},
  pages = {4015--4019},
}

Leveraging the Feature Distribution in Transfer-based Few-Shot Learning

Y. Hu, V. Gripon and S. Pateux, "Leveraging the Feature Distribution in Transfer-based Few-Shot Learning," in International Conference on Artificial Neural Networks, pp. 487--499, September 2021.

Few-shot classification is a challenging problem due to the uncertainty caused by using few labelled samples. In the past few years, many methods have been proposed to solve few-shot classification, among which transfer-based methods have proved to achieve the best performance. Following this vein, in this paper we propose a novel transfer-based method that builds on two successive steps: 1) preprocessing the feature vectors so that they become closer to Gaussian-like distributions, and 2) leveraging this preprocessing using an optimal-transport inspired algorithm (in the case of transductive settings). Using standardized vision benchmarks, we prove the ability of the proposed methodology to achieve state-of-the-art accuracy with various datasets, backbone architectures and few-shot settings.
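
As an illustration of the first step, here is a minimal sketch assuming the features are non-negative outputs of a ReLU backbone; the power transform with exponent 0.5, the centering and the unit-norm projection are common choices for making feature distributions more Gaussian-like, not necessarily the paper's exact pipeline.

import numpy as np

def preprocess_features(feats, beta=0.5, eps=1e-6):
    # Power transform: shrinks heavy tails toward Gaussian-like shapes
    f = np.power(feats + eps, beta)
    # Center the task's feature cloud, then project onto the unit sphere
    f = f - f.mean(axis=0, keepdims=True)
    return f / (np.linalg.norm(f, axis=1, keepdims=True) + eps)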

Download the manuscript.

Bibtex
@inproceedings{HuGriPat20219,
  author = {Yuqing Hu and Vincent Gripon and Stéphane
Pateux},
  title = {Leveraging the Feature Distribution in
Transfer-based Few-Shot Learning},
  booktitle = {International Conference on Artificial
Neural Networks},
  year = {2021},
  pages = {487--499},
  month = {September},
}

Graph topology inference benchmarks for machine learning

C. Lassance, V. Gripon and G. Mateos, "Graph topology inference benchmarks for machine learning," in IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1--6, September 2021.

Graphs are nowadays ubiquitous in the fields of signal processing and machine learning. As a tool used to express relationships between objects, graphs can be deployed to various ends: (i) clustering of vertices, (ii) semi-supervised classification of vertices, (iii) supervised classification of graph signals, and (iv) denoising of graph signals. However, in many practical cases graphs are not explicitly available and must therefore be inferred from data. Validation is a challenging endeavor that naturally depends on the downstream task for which the graph is learnt. Accordingly, it has often been difficult to compare the efficacy of different algorithms. In this work, we introduce several easy-to-use and publicly released benchmarks specifically designed to reveal the relative merits and limitations of graph inference methods. We also contrast some of the most prominent techniques in the literature.

Download the manuscript.

Bibtex
@inproceedings{LasGriMat20219,
  author = {Carlos Lassance and Vincent Gripon and
Gonzalo Mateos},
  title = {Graph topology inference benchmarks for
machine learning},
  booktitle = {IEEE 30th International Workshop on
Machine Learning for Signal Processing (MLSP)},
  year = {2021},
  pages = {1--6},
  month = {September},
}

Improving Classification Accuracy with Graph Filtering

M. Hamidouche, C. Lassance, Y. Hu, L. Drumetz, B. Pasdeloup and V. Gripon, "Improving Classification Accuracy with Graph Filtering," in IEEE International Conference on Image Processing (ICIP), pp. 334--338, September 2021.

In machine learning, classifiers are typically susceptible to noise in the training data. In this work, we aim at reducing intra-class noise with the help of graph filtering to improve the classification performance. Considered graphs are obtained by connecting samples of the training set that belong to the same class depending on the similarity of their representation in a latent space. We show that the proposed graph filtering methodology has the effect of asymptotically reducing intra-class variance, while maintaining the mean. While our approach applies to all classification problems in general, it is particularly useful in few-shot settings, where intra-class noise can have a huge impact due to the small sample selection. Using standardized benchmarks in the field of vision, we empirically demonstrate the ability of the proposed method to slightly improve state-of-the-art results in both cases of few-shot and standard classification.
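
A minimal sketch of the filtering step follows, assuming a cosine-similarity graph built over the samples of a same class and a single low-pass diffusion step; the mixing coefficient and the row-normalized adjacency are illustrative choices of mine.

import numpy as np

def lowpass_filter(features, alpha=0.5):
    # Similarity graph over same-class samples (cosine weights, no self-loops)
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    adj = np.clip(f @ f.T, 0.0, None)
    np.fill_diagonal(adj, 0.0)
    # One diffusion step: mix each sample with its neighborhood average,
    # which reduces intra-class variance while preserving the class mean
    neighborhood = adj @ features / (adj.sum(axis=1, keepdims=True) + 1e-9)
    return (1 - alpha) * features + alpha * neighborhood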

Download the manuscript.

Bibtex
@inproceedings{HamLasHuDruPasGri20219,
  author = {Mounia Hamidouche and Carlos Lassance and
Yuqing Hu and Lucas Drumetz and Bastien Pasdeloup and
Vincent Gripon},
  title = {Improving Classification Accuracy with
Graph Filtering},
  booktitle = {IEEE International Conference on Image
Processing (ICIP)},
  year = {2021},
  pages = {334--338},
  month = {September},
}

Few-shot Decoding of Brain Activation Maps

M. Bontonou, G. Lioi, N. Farrugia and V. Gripon, "Few-shot Decoding of Brain Activation Maps," in 29th European Signal Processing Conference (EUSIPCO), pp. 1326--1330, August 2021.

Few-shot learning addresses problems for which a limited number of training examples are available. So far, the field has been mostly driven by applications in computer vision. Here, we are interested in adapting recently introduced few-shot methods to solve problems dealing with neuroimaging data, a promising application field. To this end, we create a neuroimaging benchmark dataset for few-shot learning and compare multiple learning paradigms, including meta-learning, as well as various backbone networks. Our experiments show that few-shot methods are able to efficiently decode brain signals using few examples, which paves the way for a number of applications in clinical and cognitive neuroscience, such as identifying biomarkers from brain scans or understanding the generalization of brain representations across a wide range of cognitive tasks.

Download the manuscript.

Bibtex
@inproceedings{BonLioFarGri20218,
  author = {Myriam Bontonou and Giulia Lioi and
Nicolas Farrugia and Vincent Gripon},
  title = {Few-shot Decoding of Brain Activation
Maps},
  booktitle = {29th European Signal Processing
Conference (EUSIPCO)},
  year = {2021},
  pages = {1326--1330},
  month = {August},
}

Inferring Graph Signal Translations as Invariant Transformations for Classification Tasks

R. Baena, L. Drumetz and V. Gripon, "Inferring Graph Signal Translations as Invariant Transformations for Classification Tasks," in 29th European Signal Processing Conference (EUSIPCO), pp. 2169--2173, August 2021.

The field of Graph Signal Processing (GSP) has proposed tools to generalize harmonic analysis to complex domains represented through graphs. Among these tools are translations, which are required to define many others. Most works propose to define translations using solely the graph structure (i.e. edges). Such a problem is ill-posed in general as a graph conveys information about neighborhood but not about directions. In this paper, we propose to infer translations as edge-constrained operations that make a supervised classification problem invariant using a deep learning framework. As such, our methodology uses both the graph structure and labeled signals to infer translations. We perform experiments with regular 2D images and abstract hyperlink networks to show the effectiveness of the proposed methodology in inferring meaningful translations for signals supported on graphs.

Download the manuscript.

Bibtex
@inproceedings{BaeDruGri20218,
  author = {Raphael Baena and Lucas Drumetz and
Vincent Gripon},
  title = {Inferring Graph Signal Translations as
Invariant Transformations for Classification Tasks},
  booktitle = {29th European Signal Processing
Conference (EUSIPCO)},
  year = {2021},
  pages = {2169--2173},
  month = {August},
}

Using Deep Neural Networks to Predict and Improve the Performance of Polar Codes

M. Léonardon and V. Gripon, "Using Deep Neural Networks to Predict and Improve the Performance of Polar Codes," in 11th International Symposium on Topics in Coding (ISTC), pp. 1--5, August 2021.

Polar codes can theoretically achieve very competitive Frame Error Rates. In practice, their performance may depend on the chosen decoding procedure, as well as other parameters of the communication system they are deployed upon. As a consequence, designing efficient polar codes for a specific context can quickly become challenging. In this paper, we introduce a methodology that consists in training deep neural networks to predict the frame error rate of polar codes based on their frozen bit construction sequence. We introduce an algorithm based on Projected Gradient Descent that leverages the gradient of the neural network function to generate promising frozen bit sequences. We showcase on generated datasets the ability of the proposed methodology to produce codes more efficient than those used to train the neural networks, even when the latter are selected among the most efficient ones.
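
Here is a minimal sketch of the search step, with several assumptions of mine: fer_net stands for an already-trained surrogate network predicting the frame error rate from a relaxed (real-valued) frozen-bit indicator vector, and the projection and binarization rules below are illustrative.

import torch

def search_construction(fer_net, n=64, k=32, steps=200, lr=0.1):
    # Relaxed indicator vector: x[i] close to 1 means "information bit at i"
    x = torch.rand(n, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        fer_net(x.unsqueeze(0)).squeeze().backward()  # predicted FER as objective
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # projection step of Projected Gradient Descent
    # Binarize: keep the k most "information-like" positions
    construction = torch.zeros(n, dtype=torch.bool)
    construction[torch.topk(x.detach(), k).indices] = True
    return construction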

Download the manuscript.

Bibtex
@inproceedings{LéGri20218,
  author = {Mathieu Léonardon and Vincent Gripon},
  title = {Using Deep Neural Networks to Predict and
Improve the Performance of Polar Codes},
  booktitle = {11th International Symposium on Topics
in Coding (ISTC)},
  year = {2021},
  pages = {1--5},
  month = {August},
}

Attention Based Pruning for Shift Networks

G. B. Hacene, C. Lassance, V. Gripon, M. Courbariaux and Y. Bengio, "Attention Based Pruning for Shift Networks," in 25th International Conference on Pattern Recognition (ICPR), pp. 4054--4061, January 2021.

In many application domains such as computer vision, Convolutional Layers (CLs) are key to the accuracy of deep learning methods. However, it is often required to assemble a large number of CLs, each containing thousands of parameters, in order to reach state-of-the-art accuracy, thus resulting in complex and demanding systems that are poorly fitted to resource-limited devices. Recently, methods have been proposed to replace the generic convolution operator by the combination of a shift operation and a simpler 1x1 convolution. The resulting block, called Shift Layer (SL), is an efficient alternative to CLs in the sense that it reaches similar accuracies on various tasks with faster computations and fewer parameters. In this contribution, we introduce Shift Attention Layers (SALs), which extend SLs by using an attention mechanism that learns which shifts are best while the network function is trained. We demonstrate that SALs are able to outperform vanilla SLs (and CLs) on various object recognition benchmarks while significantly reducing the number of float operations and parameters for the inference.

Download the manuscript.

Bibtex
@inproceedings{HacLasGriCouBen20211,
  author = {Ghouthi Boukli Hacene and Carlos Lassance
and Vincent Gripon and Matthieu Courbariaux and Yoshua
Bengio},
  title = {Attention Based Pruning for Shift
Networks},
  booktitle = {25th International Conference on
Pattern Recognition (ICPR)},
  year = {2021},
  pages = {4054--4061},
  month = {January},
}

Graph-based Interpolation of Feature Vectors for Accurate Few-Shot Classification

Y. Hu, V. Gripon and S. Pateux, "Graph-based Interpolation of Feature Vectors for Accurate Few-Shot Classification," in 25th International Conference on Pattern Recognition (ICPR), pp. 8164--8171, January 2021.

In few-shot classification, the aim is to learn models able to discriminate classes using only a small number of labeled examples. In this context, works have proposed to introduce Graph Neural Networks (GNNs) aiming at exploiting the information contained in other samples treated concurrently, which is commonly referred to as the transductive setting in the literature. These GNNs are trained all together with a backbone feature extractor. In this paper, we propose a new method that relies on graphs only to interpolate feature vectors instead, resulting in a transductive learning setting with no additional parameters to train. Our proposed method thus exploits two levels of information: a) transfer features obtained on generic datasets, b) transductive information obtained from other samples to be classified. Using standard few-shot vision classification datasets, we demonstrate its ability to bring significant gains compared to other works.
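
To illustrate the idea, here is a minimal sketch (my assumptions, not the authors' code) in which each feature vector is replaced by a mix of itself and the average of its neighbors in a cosine-similarity kNN graph built over all samples of the task:

import numpy as np

def interpolate_features(feats, k=10, alpha=0.7):
    # Cosine-similarity kNN graph over labeled and query samples alike
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-9)
    sims = f @ f.T
    np.fill_diagonal(sims, -np.inf)
    neighbors = np.argsort(-sims, axis=1)[:, :k]
    adj = np.zeros_like(sims)
    rows = np.repeat(np.arange(feats.shape[0]), k)
    adj[rows, neighbors.ravel()] = 1.0
    adj = np.maximum(adj, adj.T)  # symmetrize the graph
    # Interpolate: mix each vector with its neighborhood average
    neigh_avg = adj @ feats / (adj.sum(axis=1, keepdims=True) + 1e-9)
    return alpha * feats + (1 - alpha) * neigh_avg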

Download the manuscript.

Bibtex
@inproceedings{GriPat20211,
  author = {Yuqing Hu and Vincent Gripon and
Stéphane Pateux},
  title = {Graph-based Interpolation of Feature
Vectors for Accurate Few-Shot Classification},
  booktitle = {25th International Conference on
Pattern Recognition (ICPR)},
  year = {2021},
  pages = {8164--8171},
  month = {January},
}

Quantized Guided Pruning for Efficient Hardware Implementations of Convolutional Neural Networks

G. B. Hacene, V. Gripon, M. Arzel, N. Farrugia and Y. Bengio, "Quantized Guided Pruning for Efficient Hardware Implementations of Convolutional Neural Networks," in 18th IEEE International New Circuits and Systems Conference (NEWCAS), pp. 206--209, 2020.

Convolutional Neural Networks (CNNs) are state-of-the-art in numerous computer vision tasks such as object classification and detection. However, the large amount of parameters they contain leads to a high computational complexity and strongly limits their usability in budget-constrained devices such as embedded devices. In this paper, we propose a combination of a new pruning technique and a quantization scheme that effectively reduce the complexity and memory usage of convolutional layers of CNNs, and replace the complex convolutional operation by a low-cost multiplexer. We perform experiments on the CIFAR10, CIFAR100 and SVHN datasets and show that the proposed method achieves almost state-of-the-art accuracy, while drastically reducing the computational and memory footprints. We also propose an efficient hardware architecture to accelerate CNN operations. The proposed hardware architecture is pipelined and accommodates multiple layers working at the same time to speed up the inference process.

Download the manuscript.

Bibtex
@inproceedings{HacGriArzFarBen2020,
  author = {Ghouthi Boukli Hacene and Vincent Gripon
and Matthieu Arzel and Nicolas Farrugia and Yoshua
Bengio},
  title = {Quantized Guided Pruning for Efficient
Hardware Implementations of Convolutional Neural
Networks},
  booktitle = {18th IEEE International New Circuits
and Systems Conference (NEWCAS)},
  year = {2020},
  pages = {206--209},
}

BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization

M. Nikolić, G. B. Hacene, C. Bannon, A. D. Lascorz, M. Courbariaux, Y. Bengio, V. Gripon and A. Moshovos, "BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization," in ArXiv Preprint, 2020.

Neural networks have demonstrably achieved state-of-the-art accuracy using low-bitlength integer quantization, yielding both execution time and energy benefits on existing hardware designs that support short bitlengths. However, the question of finding the minimum bitlength for a desired accuracy remains open. We introduce a training method for minimizing inference bitlength at any granularity while maintaining accuracy. Namely, we propose a regularizer that penalizes large bitlength representations throughout the architecture and show how it can be modified to minimize other quantifiable criteria, such as number of operations or memory footprint. We demonstrate that our method learns thrifty representations while maintaining accuracy. With ImageNet, the method produces an average per-layer bitlength of 4.13, 3.76 and 4.36 bits on AlexNet, ResNet18 and MobileNet V2 respectively, remaining within 2.0%, 0.5% and 0.5% of the base TOP-1 accuracy.
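
A minimal sketch of the idea, under assumptions of mine: each tensor gets a learnable bitlength used by a uniform quantizer with a straight-through estimator, and a regularizer pushes the average bitlength down; the training loss would then be the task loss plus this penalty.

import torch

class LearnedBitQuantizer(torch.nn.Module):
    # Uniform quantizer whose bitlength is a trainable parameter (sketch)
    def __init__(self, init_bits=8.0):
        super().__init__()
        self.bits = torch.nn.Parameter(torch.tensor(init_bits))

    def forward(self, x):
        levels = 2.0 ** self.bits.clamp(min=1.0)
        scale = x.detach().abs().max() + 1e-9
        xs = x / scale * levels
        # Straight-through estimator on the rounding only, so gradients
        # still reach the bitlength parameter through `levels`
        xq = xs + (torch.round(xs) - xs).detach()
        return xq / levels * scale

def bitlength_penalty(quantizers, lam=0.01):
    # Regularizer penalizing large bitlength representations
    return lam * sum(q.bits for q in quantizers) / len(quantizers)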

Download the manuscript.

Bibtex
@inproceedings{NikHacBanLasCouBenGriMos2020,
  author = {Miloš Nikolić and Ghouthi Boukli Hacene
and Ciaran Bannon and Alberto Delmas Lascorz and
Matthieu Courbariaux and Yoshua Bengio and Vincent
Gripon and Andreas Moshovos},
  title = {BitPruning: Learning Bitlengths for
Aggressive and Accurate Quantization},
  booktitle = {ArXiv Preprint},
  year = {2020},
}

Ranking Deep Learning Generalization using Label Variation in Latent Geometry Graphs

C. Lassance, L. Béthune, M. Bontonou, M. Hamidouche and V. Gripon, "Ranking Deep Learning Generalization using Label Variation in Latent Geometry Graphs," in ArXiv Preprint, 2020.

Measuring the generalization performance of a Deep Neural Network (DNN) without relying on a validation set is a difficult task. In this work, we propose exploiting Latent Geometry Graphs (LGGs) to represent the latent spaces of trained DNN architectures. Such graphs are obtained by connecting samples that yield similar latent representations at a given layer of the considered DNN. We then obtain a generalization score by looking at how strongly connected samples of distinct classes are in LGGs. This score allowed us to rank 3rd on the NeurIPS 2020 Predicting Generalization in Deep Learning (PGDL) competition.
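
A minimal sketch of such a score, with illustrative choices of mine (cosine similarity, kNN graph): it measures how often neighbors in the latent graph share the same class, i.e. how smooth the label signal is on the graph.

import numpy as np

def label_variation_score(latents, labels, k=10):
    # kNN graph on latent representations, built with cosine similarity
    f = latents / (np.linalg.norm(latents, axis=1, keepdims=True) + 1e-9)
    sims = f @ f.T
    np.fill_diagonal(sims, -np.inf)
    neighbors = np.argsort(-sims, axis=1)[:, :k]
    # Fraction of edges linking samples of the same class: higher means
    # smoother labels, hence predicted better generalization
    return (labels[:, None] == labels[neighbors]).mean()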

Download the manuscript.

Bibtex
@inproceedings{LasBéBonHamGri2020,
  author = {Carlos Lassance and Louis Béthune and
Myriam Bontonou and Mounia Hamidouche and Vincent
Gripon},
  title = {Ranking Deep Learning Generalization using
Label Variation in Latent Geometry Graphs},
  booktitle = {ArXiv Preprint},
  year = {2020},
}

GPU-based Self-Organizing Maps for Post-Labeled Few-Shot Unsupervised Learning

L. Khacef, V. Gripon and B. Miramond, "GPU-based Self-Organizing Maps for Post-Labeled Few-Shot Unsupervised Learning," in International Conference on Neural Information Processing, pp. 404--416, August 2020.

Few-shot classification is a challenge in machine learning where the goal is to train a classifier using a very limited number of labeled examples. This scenario is likely to occur frequently in real life, for example when data acquisition or labeling is expensive. In this work, we consider the problem of post-labeled few-shot unsupervised learning, a classification task where representations are learned in an unsupervised fashion, to be later labeled using very few annotated examples. We argue that this problem is very likely to occur on the edge, when the embedded device directly acquires the data, and the expert needed to perform labeling cannot be prompted often. To address this problem, we consider an algorithm consisting of the concatenation of transfer learning with clustering using Self-Organizing Maps (SOMs). We introduce a TensorFlow-based implementation to speed-up the process in multi-core CPUs and GPUs. Finally, we demonstrate the effectiveness of the method using standard off-the-shelf few-shot classification benchmarks.
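
As an illustration of the post-labeling step (the unsupervised SOM training itself is not shown, and all names below are mine), each SOM unit receives the majority label of the few annotated samples it wins:

import numpy as np

def label_som_units(prototypes, feats, labels, n_classes):
    # Best-matching unit of each labeled sample
    dists = ((feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    bmu = dists.argmin(axis=1)
    # Majority vote among the few annotated samples won by each unit
    votes = np.zeros((prototypes.shape[0], n_classes))
    for unit, y in zip(bmu, labels):
        votes[unit, y] += 1
    return votes.argmax(axis=1)  # unit-to-class map used at inference time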

Download the manuscript.

Bibtex
@inproceedings{KhaGriMir20208,
  author = {Lyes Khacef and Vincent Gripon and Benoit
Miramond},
  title = {GPU-based Self-Organizing Maps for
Post-Labeled Few-Shot Unsupervised Learning},
  booktitle = {International Conference on Neural
Information Processing},
  year = {2020},
  pages = {404--416},
  month = {August},
}

Efficient Hardware Implementation of Incremental Learning and Inference on Chip

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Efficient Hardware Implementation of Incremental Learning and Inference on Chip," in 17th IEEE International New Circuits and Systems Conference (NEWCAS), pp. 206--209, June 2020.

In this paper, we tackle the problem of incrementally learning a classifier, one example at a time, directly on chip. To this end, we propose an efficient hardware implementation of a recently introduced incremental learning procedure that achieves state-of-the-art performance by combining transfer learning with majority votes and quantization techniques. The proposed design is able to accommodate both new examples and new classes directly on the chip. We detail the hardware implementation of the method (implemented on an FPGA target) and show that it requires limited resources while providing a significant acceleration compared to using a CPU.


Bibtex
@inproceedings{HacGriFarArzJez20206,
  author = {Ghouthi Boukli Hacene and Vincent Gripon
and Nicolas Farrugia and Matthieu Arzel and Michel
Jezequel},
  title = {Efficient Hardware Implementation of
Incremental Learning and Inference on Chip},
  booktitle = {17th IEEE International New Circuits
and Systems Conference (NEWCAS)},
  year = {2020},
  pages = {206--209},
  month = {June},
}

Deep geometric knowledge distillation with graphs

C. Lassance, M. Bontonou, G. B. Hacene, V. Gripon, J. Tang and A. Ortega, "Deep geometric knowledge distillation with graphs," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8484--8488, May 2020.

In most cases, deep learning architectures are trained disregarding the number of operations and the energy consumption. However, some applications, like embedded systems, can be resource-constrained during inference. A popular approach to reduce the size of a deep learning architecture consists in distilling knowledge from a bigger network (teacher) to a smaller one (student). Directly training the student to mimic the teacher representation can be effective, but it requires that both share the same latent space dimensions. In this work, we focus instead on relative knowledge distillation (RKD), which considers the geometry of the respective latent spaces, allowing for dimension-agnostic transfer of knowledge. Specifically, we introduce a graph-based RKD method, in which graphs are used to capture the geometry of latent spaces. Using classical computer vision benchmarks, we demonstrate the ability of the proposed method to efficiently distill knowledge from the teacher to the student, leading to better accuracy for the same budget as compared to existing RKD alternatives.
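
A minimal sketch of a graph-based RKD loss, under assumptions of mine: both latent spaces are summarized by batch-level cosine-similarity graphs, which have the same shape whatever the latent dimensions, and the student is trained to match the teacher's graph.

import torch
import torch.nn.functional as F

def graph_rkd_loss(teacher_feats, student_feats):
    # Latent-geometry graphs: (batch, batch) cosine-similarity adjacencies,
    # which makes the transfer agnostic to the latent dimensions
    def sim_graph(f):
        return F.normalize(f, dim=1) @ F.normalize(f, dim=1).t()
    return F.mse_loss(sim_graph(student_feats),
                      sim_graph(teacher_feats).detach())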


Bibtex
@inproceedings{LasBonHacGriTanOrt20205,
  author = {Carlos Lassance and Myriam Bontonou and
Ghouthi Boukli Hacene and Vincent Gripon and Jian Tang
and Antonio Ortega},
  title = {Deep geometric knowledge distillation with
graphs},
  booktitle = {IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP)},
  year = {2020},
  pages = {8484--8488},
  month = {May},
}

Transfer Learning with Sparse Associative Memories

Q. Jodelet, V. Gripon and M. Hagiwara, "Transfer Learning with Sparse Associative Memories," in International Conference on Artificial Neural Networks, pp. 497--512, 2019.

In this paper, we introduce a novel layer designed to be used as the output of pre-trained neural networks in the context of classification. Based on Associative Memories, this layer can help design Deep Neural Networks which support incremental learning and which can be (partially) trained in real time on embedded devices. Experiments on the ImageNet dataset and other domain-specific datasets show that it is possible to design more flexible and faster-to-train Neural Networks at the cost of a slight decrease in accuracy.

Download the manuscript.

Bibtex
@inproceedings{JodGriHag2019,
  author = {Quentin Jodelet and Vincent Gripon and
Masafumi Hagiwara},
  title = {Transfer Learning with Sparse Associative
Memories},
  booktitle = {International Conference on Artificial
Neural Networks},
  year = {2019},
  pages = {497--512},
}

Robustesse des réseaux de neurones profonds aux défaillances mémoire

G. B. Hacene, F. Leduc-Primeau, A. B. Soussia, V. Gripon and F. Gagnon, "Robustesse des réseaux de neurones profonds aux défaillances mémoire," in GRETSI, August 2019.

Deep Neural Networks (DNNs) rely on a large number of parameters and computations, so their hardware implementation on resource-constrained systems is challenging. In this paper, we investigate the possibility of reducing the supply voltage of the memories used in such a system in order to lower its energy consumption. We analyze the robustness of some DNNs to the consequences of these reductions and propose a solution to maintain a high level of accuracy. Our experiments clearly show the interest of operating the system in a faulty regime to reduce energy consumption, without causing losses in accuracy.

Télécharger le manuscrit.

Bibtex
@inproceedings{HacLedSouGriGag201908,
  author = {Ghouthi Boukli Hacene and Francois
Leduc-Primeau and Amal Ben Soussia and Vincent Gripon
and Francois Gagnon},
  title = {Robustesse des réseaux de neurones
profonds aux défaillances mémoire},
  booktitle = {GRETSI},
  year = {2019},
  month = {August},
}

Un modèle unifié pour la classification de signaux sur graphe avec de l’apprentissage profond

M. Bontonou, C. Lassance, J. Vialatte et V. Gripon, "Un modèle unifié pour la classification de signaux sur graphe avec de l’apprentissage profond," dans GRETSI, août 2019.

We propose to review some of the major models performing supervised classification of graph signals in deep learning. The goal of supervised classification of graph signals is to classify a signal whose components are defined on the vertices of a graph. Convolutional Neural Networks (CNNs) are very efficient at classifying signals defined on a grid graph (such as images). However, as they cannot be used on signals defined on an arbitrary graph, other models have emerged, aiming to extend their properties to any graph. The overall purpose of this study is to provide a comparison of some of the major models in supervised classification of graph signals. We also introduce a unified formalism.

Télécharger le manuscrit.

Bibtex
@inproceedings{BonLasViaGri201908,
  author = {Myriam Bontonou and Carlos Lassance and
Jean-Charles Vialatte and Vincent Gripon},
  title = {Un modèle unifié pour la classification
de signaux sur graphe avec de l’apprentissage
profond},
  booktitle = {GRETSI},
  year = {2019},
  month = {August},
}

Robustesse structurelle des architectures d’apprentissage profond

C. Lassance, V. Gripon, J. Tang et A. Ortega, "Robustesse structurelle des architectures d’apprentissage profond," dans GRETSI, août 2019.

Deep neural networks have become the reference in many machine learning problems. Unfortunately, they are sensitive to various types of noise and to deformations of their inputs. In this work, we introduce a new definition of robustness characterizing the Lipschitz constant of the network function on a restricted subset of its domain. We compare this definition to existing ones, and discuss its connections with different methods introduced in the literature to increase the robustness of networks.

Télécharger le manuscrit.

Bibtex
@inproceedings{LasGriTanOrt201908,
  author = {Carlos Lassance and Vincent Gripon and
Jian Tang and Antonio Ortega},
  title = {Robustesse structurelle des architectures
d’apprentissage profond},
  booktitle = {GRETSI},
  year = {2019},
  month = {August},
}

Comparing linear structure-based and data-driven latent spatial representations for sequence prediction

M. Bontonou, C. Lassance, V. Gripon et N. Farrugia, "Comparing linear structure-based and data-driven latent spatial representations for sequence prediction," dans Wavelets and Sparsity XVIII, San Diego, USA, août 2019.

Predicting the future of Graph-supported Time Series (GTS) is a key challenge in many domains, such as climate monitoring, finance or neuroimaging. Yet it is a highly difficult problem, as it requires accounting jointly for time and graph (spatial) dependencies. To simplify this process, it is common to use a two-step procedure in which spatial and time dependencies are dealt with separately. In this paper, we are interested in comparing various linear spatial representations, namely structure-based ones and data-driven ones, in terms of how they help predict the future of GTS. To that end, we perform experiments with various datasets including spontaneous brain activity and raw videos.

Télécharger le manuscrit.

Bibtex
@inproceedings{BonLasGriFar20198,
  author = {Myriam Bontonou and Carlos Lassance and
Vincent Gripon and Nicolas Farrugia},
  title = {Comparing linear structure-based and
data-driven latent spatial representations for
sequence prediction},
  booktitle = {Wavelets and Sparsity XVIII},
  year = {2019},
  address = {San Diego, USA},
  month = {August},
}

Structural Robustness for Deep Learning Architectures

C. Lassance, V. Gripon, J. Tang et A. Ortega, "Structural Robustness for Deep Learning Architectures," dans Data Science Workshop, pp. 125--129, juin 2019.

Deep Networks have been shown to provide state-of-the-art performance in many machine learning challenges. Unfortunately, they are susceptible to various types of noise, including adversarial attacks and corrupted inputs. In this work we introduce a formal definition of robustness which can be viewed as a localized Lipschitz constant of the network function, quantified in the domain of the data to be classified. We compare this notion of robustness to existing ones, and study its connections with methods in the literature. We evaluate this metric by performing experiments on various competitive vision datasets.
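
As a rough illustration of the metric, here is a hedged numerical sketch (ours, with hypothetical parameter choices, not the paper's exact definition) estimating a localized Lipschitz constant of a network function f around a data point, by sampling perturbations inside a small ball:

import numpy as np

def local_lipschitz(f, x, radius=0.1, n_samples=100, seed=0):
    # Empirical Lipschitz ratio of f restricted to a ball around x.
    rng = np.random.default_rng(seed)
    fx = f(x)
    best = 0.0
    for _ in range(n_samples):
        delta = rng.normal(size=x.shape)
        delta *= radius * rng.uniform() / np.linalg.norm(delta)
        ratio = np.linalg.norm(f(x + delta) - fx) / np.linalg.norm(delta)
        best = max(best, ratio)
    return best

# Toy network function: a fixed random linear map followed by a ReLU.
W = np.random.default_rng(1).normal(size=(10, 32))
f = lambda v: np.maximum(W @ v, 0.0)
print(local_lipschitz(f, np.ones(32)))

Aggregating this quantity over the dataset gives a number in the spirit of the localized Lipschitz constant described above; the exact formulation is in the manuscript.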

Télécharger le manuscrit.

Bibtex
@inproceedings{LasGriTanOrt20196,
  author = {Carlos Lassance and Vincent Gripon and
Jian Tang and Antonio Ortega},
  title = {Structural Robustness for Deep Learning
Architectures},
  booktitle = {Data Science Workshop},
  year = {2019},
  pages = {125--129},
  month = {June},
}

Introducing Graph Smoothness Loss for Training Deep Learning Architectures

M. Bontonou, C. Lassance, G. B. Hacene, V. Gripon, J. Tang et A. Ortega, "Introducing Graph Smoothness Loss for Training Deep Learning Architectures," dans Data Science Workshop, pp. 160--164, juin 2019.

We introduce a novel loss function for training deep learning architectures to perform classification. It consists in minimizing the smoothness of label signals on similarity graphs built at the output of the architecture. Equivalently, it can be seen as maximizing the distances between the network function images of training inputs from distinct classes. As such, only distances between pairs of examples in distinct classes are taken into account in the process, and the training does not prevent inputs from the same class from being mapped to distant locations in the output domain. We show that this loss leads to classification performance similar to that of architectures trained using the classical cross-entropy, while offering interesting degrees of freedom and properties. We also demonstrate the interest of the proposed loss in increasing the robustness of trained architectures to deviations of the inputs.
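
The following PyTorch sketch (our own reading of the abstract, with an RBF similarity graph as an assumed design choice) shows how such a loss can be computed on a batch: since label signals are constant within a class, only cross-class edges contribute to the smoothness term.

import torch

def graph_smoothness_loss(outputs, labels):
    # outputs: (batch, dim) network images; labels: (batch,) class ids.
    dists = torch.cdist(outputs, outputs) ** 2      # pairwise squared distances
    weights = torch.exp(-dists)                     # similarity graph (RBF kernel)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    # Sum of similarities over cross-class pairs: minimizing it pushes
    # examples from distinct classes apart in the output domain.
    return (weights * (~same).float()).sum() / outputs.shape[0]

# Toy usage
out = torch.randn(16, 8, requires_grad=True)
lab = torch.randint(0, 4, (16,))
graph_smoothness_loss(out, lab).backward()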

Télécharger le manuscrit.

Bibtex
@inproceedings{BonLasHacGriTanOrt20196,
  author = {Myriam Bontonou and Carlos Lassance and
Ghouthi Boukli Hacene and Vincent Gripon and Jian Tang
and Antonio Ortega},
  title = {Introducing Graph Smoothness Loss for
Training Deep Learning Architectures},
  booktitle = {Data Science Workshop},
  year = {2019},
  pages = {160--164},
  month = {June},
}

Training modern deep neural networks for memory-fault robustness

G. B. Hacene, F. Leduc-Primeau, A. B. Soussia, V. Gripon et F. Gagnon, "Training modern deep neural networks for memory-fault robustness," dans Proceedings of the IEEE International Symposium on Circuits and Systems, pp. 1--5, mai 2019.

Because deep neural networks (DNNs) rely on a large number of parameters and computations, their implementation in energy-constrained systems is challenging. In this paper, we investigate the solution of reducing the supply voltage of the memories used in the system, which results in bit-cell faults. We explore the robustness of state-of-the-art DNN architectures to such faults and propose a regularizer meant to mitigate their effects on accuracy. Our experiments clearly demonstrate the interest of operating the system in a faulty regime to save energy without reducing accuracy.
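
A hedged sketch of the kind of fault model involved (our assumptions: 8-bit uniform quantization and independent bit flips; the paper's precise setup may differ) can be written in a few lines:

import numpy as np

def inject_bit_faults(weights, p_fault, seed=0):
    # Flip each bit of the int8 representation independently with prob. p_fault.
    rng = np.random.default_rng(seed)
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    bits = np.unpackbits(q.view(np.uint8))
    flips = (rng.random(bits.shape) < p_fault).astype(np.uint8)
    faulty = np.packbits(bits ^ flips).view(np.int8)
    return faulty.astype(np.float32).reshape(weights.shape) * scale

w = np.random.default_rng(2).normal(size=(4, 4)).astype(np.float32)
print(inject_bit_faults(w, p_fault=0.01))

Exposing the weights to such corruption during training is one plausible way to realize the regularization idea: the network learns parameters whose accuracy degrades gracefully under bit-cell faults.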


Bibtex
@inproceedings{HacLedSouGriGag20195,
  author = {Ghouthi Boukli Hacene and François
Leduc-Primeau and Amal Ben Soussia and Vincent Gripon
and François Gagnon},
  title = {Training modern deep neural networks for
memory-fault robustness},
  booktitle = {Proceedings of the IEEE International
Symposium on Circuits and Systems},
  year = {2019},
  pages = {1--5},
  month = {May},
}

A Unified Deep Learning Formalism For Processing Graph Signals

M. Bontonou, C. Lassance, J. Vialatte et V. Gripon, "A Unified Deep Learning Formalism For Processing Graph Signals," dans SDM Special Session on Graph Neural Networks, mai 2019.

Convolutional Neural Networks are very efficient at processing signals defined on a discrete Euclidean space (such as images). However, as they cannot be used on signals defined on an arbitrary graph, other models have emerged, aiming to extend their properties. We propose to review some of the major deep learning models designed to exploit the underlying graph structure of signals. We express them in a unified formalism, giving them a new and comparative reading.

Télécharger le manuscrit.

Bibtex
@inproceedings{BonLasViaGri20195,
  author = {Myriam Bontonou and Carlos Lassance and
Jean-Charles Vialatte and Vincent Gripon},
  title = {A Unified Deep Learning Formalism For
Processing Graph Signals},
  booktitle = {SDM Special Session on Graph Neural
Networks},
  year = {2019},
  month = {May},
}

Graph-Projected Signal Processing

N. Grelier, C. R. K. Lassance, E. Dupraz et V. Gripon, "Graph-Projected Signal Processing," dans IEEE GlobalSIP, pp. 763--767, 2018.

In the past few years, Graph Signal Processing (GSP) has attracted a lot of interest for its aim of extending Fourier analysis to arbitrary discrete topologies described by graphs. Since it is essentially built upon analogies between classical temporal Fourier transforms and the spectrum of ring graphs, these extensions do not necessarily yield the expected convolution and translation operators when adapted to regular multidimensional domains such as 2D grid graphs. In this paper, we are interested in alternate definitions of Fourier transforms on graphs, obtained by projecting vertices to regular metric spaces on which the Fourier transform is already well defined. We compare our method with the classical graph Fourier transform and demonstrate its interest for designing accurate convolutional neural networks on graph signals.

Télécharger le manuscrit.

Bibtex
@inproceedings{GreLasDupGri2018,
  author = {Nicolas Grelier and Carlos Rosar Kos
Lassance and Elsa Dupraz and Vincent Gripon},
  title = {Graph-Projected Signal Processing},
  booktitle = {IEEE GlobalSIP},
  year = {2018},
  pages = {763--767},
}

Matching Convolutional Neural Networks without Priors about Data

C. E. R. K. Lassance, J. Vialatte et V. Gripon, "Matching Convolutional Neural Networks without Priors about Data," dans Proceedings of Data Science Workshop, pp. 234--238, 2018.

We propose an extension of Convolutional Neural Networks (CNNs) to graph-structured data, including strided convolutions and data augmentation on graphs. Our method matches the accuracy of state-of-the-art CNNs when applied on images, without any prior about their 2D regular structure. On fMRI data, we obtain a significant gain in accuracy compared with existing graph-based alternatives.

Télécharger le manuscrit.

Bibtex
@inproceedings{LasViaGri2018,
  author = {Carlos Eduardo Rosar Kos Lassance and
Jean-Charles Vialatte and Vincent Gripon},
  title = {Matching Convolutional Neural Networks
without Priors about Data},
  booktitle = {Proceedings of Data Science Workshop},
  year = {2018},
  pages = {234--238},
}

Improving Accuracy of Nonparametric Transfer Learning Via Vector Segmentation

V. Gripon, G. B. Hacene, M. Löwe et F. Vermet, "Improving Accuracy of Nonparametric Transfer Learning Via Vector Segmentation," dans proceedings of ICASSP, pp. 2966-2970, avril 2018.

Transfer learning using deep neural networks as feature extractors has become increasingly popular over the past few years. It makes it possible to obtain state-of-the-art accuracy on datasets too small to train a deep neural network on its own, and it provides cutting-edge descriptors that, combined with nonparametric learning methods, allow rapid and flexible deployment of well-performing solutions in computationally restricted settings. In this paper, we are interested in showing that the features extracted using deep neural networks have specific properties which can be used to improve the accuracy of downstream nonparametric learning methods. Namely, we demonstrate that for some distributions where information is embedded in a few coordinates, segmenting feature vectors can lead to better accuracy. We show how this model can be applied to real datasets by performing experiments using three mainstream deep neural network feature extractors and four databases, in vision and audio.
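
A minimal sketch of the segmentation idea (illustrative only: 1-nearest-neighbor per segment and a plain majority vote are both our assumptions):

import numpy as np

def segmented_1nn(train_x, train_y, test_x, n_segments=4):
    votes = []
    for seg_tr, seg_te in zip(np.array_split(train_x, n_segments, axis=1),
                              np.array_split(test_x, n_segments, axis=1)):
        d = ((seg_te[:, None, :] - seg_tr[None, :, :]) ** 2).sum(-1)
        votes.append(train_y[d.argmin(axis=1)])   # 1-NN label per segment
    votes = np.stack(votes)                       # (n_segments, n_test)
    # Majority vote across segments.
    return np.array([np.bincount(v).argmax() for v in votes.T])

rng = np.random.default_rng(3)
Xtr, ytr = rng.normal(size=(100, 64)), rng.integers(0, 5, 100)
print(segmented_1nn(Xtr, ytr, rng.normal(size=(10, 64))))

Intuitively, when the discriminative information sits in a few coordinates, the informative segments cast reliable votes while the others vote roughly at random, which is how segmenting can improve over a single nearest-neighbor rule on the full vector.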

Télécharger le manuscrit.

Bibtex
@inproceedings{GriHacLöVer20184,
  author = {Vincent Gripon and Ghouthi Boukli Hacene
and Matthias Löwe and Franck Vermet},
  title = {Improving Accuracy of Nonparametric
Transfer Learning Via Vector Segmentation},
  booktitle = {proceedings of ICASSP},
  year = {2018},
  pages = {2966-2970},
  month = {April},
}

An Inside Look at Deep Neural Networks using Graph Signal Processing

V. Gripon, A. Ortega et B. Girault, "An Inside Look at Deep Neural Networks using Graph Signal Processing," dans Proceedings of ITA, pp. 1--9, février 2018.

Deep Neural Networks (DNNs) are state-of-the-art in many machine learning benchmarks. Understanding how they perform is a major open question. In this paper, we are interested in using graph signal processing to monitor the intermediate representations obtained in a simple DNN architecture. We compare different metrics and measures and show that the smoothness of label signals on k-nearest neighbor graphs is a good candidate to interpret the role of individual layers in achieving good performance.
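
A hedged sketch of such a measure (our simplification: the fraction of cross-label edges in the k-NN graph, which is proportional to the Laplacian quadratic form of one-hot label signals on an unweighted graph):

import numpy as np

def label_smoothness(feats, labels, k=10):
    # Build a k-NN graph on one layer's representations.
    d = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    knn = np.argsort(d, axis=1)[:, :k]
    # Fraction of edges whose endpoints carry different labels: low values
    # mean the label signal is smooth, i.e. the layer separates classes.
    return float((labels[knn] != labels[:, None]).mean())

rng = np.random.default_rng(4)
print(label_smoothness(rng.normal(size=(200, 32)), rng.integers(0, 10, 200)))

Tracking this quantity layer by layer gives the kind of monitoring signal discussed above: one would expect it to decrease as representations become more class-separated towards the output.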

Télécharger le manuscrit.

Bibtex
@inproceedings{GriOrtGir20182,
  author = {Vincent Gripon and Antonio Ortega and
Benjamin Girault},
  title = {An Inside Look at Deep Neural Networks
using Graph Signal Processing},
  booktitle = {Proceedings of ITA},
  year = {2018},
  pages = {1--9},
  month = {February},
}

Budget Restricted Incremental Learning with Pre-Trained Convolutional Neural Networks and Binary Associative Memories

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel et M. Jezequel, "Budget Restricted Incremental Learning with Pre-Trained Convolutional Neural Networks and Binary Associative Memories," dans Proceedings of SIPS, pp. 1063--1073, 2017.

Thanks to their ability to absorb large amounts of data, Convolutional Neural Networks (CNNs) have become state-of-the-art in numerous vision challenges, sometimes even on par with biological vision. They rely on optimisation routines that typically require intensive computational power, so the question of embedded architectures is a very active field of research. Of particular interest is the problem of incremental learning, where the device adapts to new observations or classes. To tackle this challenging problem, we propose to combine pre-trained CNNs with binary associative memories, using product random sampling as an intermediate between the two methods. The obtained architecture requires significantly less computational power and memory usage than existing counterparts. Moreover, using various challenging vision datasets, we show that the proposed architecture is able to perform one-shot learning (even using only a small portion of the dataset) while keeping very good accuracy.

Télécharger le manuscrit.

Bibtex
@inproceedings{HacGriFarArzJez2017,
  author = {Ghouthi Boukli Hacene and Vincent Gripon
and Nicolas Farrugia and Matthieu Arzel and Michel
Jezequel},
  title = {Budget Restricted Incremental Learning with
Pre-Trained Convolutional Neural Networks and Binary
Associative Memories},
  booktitle = {Proceedings of SIPS},
  year = {2017},
  pages = {1063--1073},
}

Incremental Learning on Chip

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel et M. Jezequel, "Incremental Learning on Chip," dans Proceedings of GlobalSip, pp. 789--792, 2017.

Learning on chip (LOC) is a challenging problem in which an embedded system learns a model and uses it to process and classify unknown data, while adapting to new observations or classes. It may require intensive computational power to adapt to new data, leading to a complex hardware implementation. We address this issue by introducing an incremental learning method based on the combination of a pre-trained Convolutional Neural Network (CNN) and majority votes, using Product Quantizing (PQ) as a bridge between them. We detail a hardware implementation of the proposed method (validated on an FPGA target) using limited hardware resources while providing substantial processing acceleration compared to a CPU counterpart.
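
An illustrative software view of this pipeline (class name, anchor initialization and vote rule are our own simplifications, not the paper's exact design):

import numpy as np

class PQMajorityVote:
    def __init__(self, n_parts, anchors_per_part, n_classes, subdim, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random anchors stand in for a trained PQ codebook here.
        self.anchors = rng.normal(size=(n_parts, anchors_per_part, subdim))
        self.votes = np.zeros((n_parts, anchors_per_part, n_classes))
        self.n_parts = n_parts

    def _nearest(self, part, sub):
        return ((self.anchors[part] - sub) ** 2).sum(-1).argmin()

    def learn_one(self, feat, label):
        # Incremental: one example only increments a handful of counters.
        for p, sub in enumerate(np.array_split(feat, self.n_parts)):
            self.votes[p, self._nearest(p, sub), label] += 1

    def predict(self, feat):
        tally = sum(self.votes[p, self._nearest(p, sub)]
                    for p, sub in enumerate(np.array_split(feat, self.n_parts)))
        return int(tally.argmax())

model = PQMajorityVote(n_parts=4, anchors_per_part=16, n_classes=3, subdim=8)
rng = np.random.default_rng(5)
model.learn_one(rng.normal(size=32), label=1)
print(model.predict(rng.normal(size=32)))

Learning reduces to incrementing counters and prediction to a few distance computations, which is what makes this kind of method amenable to a compact hardware implementation.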

Télécharger le manuscrit.

Bibtex
@inproceedings{HacGriFarArzJez2017b,
  author = {Ghouthi Boukli Hacene and Vincent Gripon
and Nicolas Farrugia and Matthieu Arzel and Michel
Jezequel},
  title = {Incremental Learning on Chip},
  booktitle = {Proceedings of GlobalSip},
  year = {2017},
  pages = {789--792},
}

Evaluating Graph Signal Processing for Neuroimaging Through Classification and Dimensionality Reduction

M. Ménoret, N. Farrugia, B. Pasdeloup et V. Gripon, "Evaluating Graph Signal Processing for Neuroimaging Through Classification and Dimensionality Reduction," dans Proceedings of GlobalSip, pp. 618--622, 2017.

Graph Signal Processing (GSP) is a promising framework to analyze multi-dimensional neuroimaging datasets, while taking into account both the spatial and functional dependencies between brain signals. In the present work, we apply dimensionality reduction techniques based on graph representations of the brain to decode brain activity from real and simulated fMRI datasets. We introduce seven graphs obtained from a) geometric structure and/or b) functional connectivity between brain areas at rest, and compare them when performing dimension reduction for classification. We show that mixed graphs using both a) and b) offer the best performance. We also show that graph sampling methods perform better than classical dimension reduction including Principal Component Analysis (PCA) and Independent Component Analysis (ICA).

Télécharger le manuscrit.

Bibtex
@inproceedings{MéFarPasGri2017,
  author = {Mathilde Ménoret and Nicolas Farrugia and
Bastien Pasdeloup and Vincent Gripon},
  title = {Evaluating Graph Signal Processing for
Neuroimaging Through Classification and Dimensionality
Reduction},
  booktitle = {Proceedings of GlobalSip},
  year = {2017},
  pages = {618--622},
}

Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs

J. Vialatte, V. Gripon et G. Coppin, "Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs," dans Proceedings of GlobalSip, pp. 623--627, 2017.

We propose a generic layer formulation that extends the properties of convolutional layers to any domain that can be described by a graph topology. Namely, we use the support of its adjacency matrix to design learnable weight sharing filters able to exploit the underlying structure of signals in the same fashion as for images. The proposed formulation makes it possible to learn the weights of the filter as well as a scheme that controls how they are shared across the graph. We perform validation experiments with image datasets and show that these filters offer performance comparable to that of convolutional ones.
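
A hedged PyTorch sketch of the idea (our own simplification: a soft assignment of each supported edge to one of a few shared weight matrices; the paper's parameterization may differ):

import torch
import torch.nn as nn

class LearnedSharingGraphLayer(nn.Module):
    def __init__(self, adj, taps, in_dim, out_dim):
        super().__init__()
        self.register_buffer("support", (adj > 0).float())   # fixed adjacency support
        n = adj.shape[0]
        self.assign = nn.Parameter(torch.randn(n, n, taps))  # learned sharing scheme
        self.weight = nn.Parameter(torch.randn(taps, in_dim, out_dim) * 0.1)

    def forward(self, x):  # x: (n_vertices, in_dim)
        # Softly assign each edge of the support to one of the shared weights.
        sharing = torch.softmax(self.assign, dim=-1) * self.support.unsqueeze(-1)
        z = torch.einsum("jd,tdo->tjo", x, self.weight)  # every tap applied everywhere
        return torch.einsum("ijt,tjo->io", sharing, z)   # aggregate over neighbors

adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
layer = LearnedSharingGraphLayer(adj, taps=3, in_dim=4, out_dim=2)
print(layer(torch.randn(3, 4)).shape)  # torch.Size([3, 2])

On a grid graph with assignments indexed by relative position, such a layer degenerates to an ordinary convolution, which is the sense in which the formulation extends convolutional layers.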

Télécharger le manuscrit.

Bibtex
@inproceedings{ViaGriCop2017,
  author = {Jean-Charles Vialatte and Vincent Gripon
and Gilles Coppin},
  title = {Learning Local Receptive Fields and their
Weight Sharing Scheme on Graphs},
  booktitle = {Proceedings of GlobalSip},
  year = {2017},
  pages = {623--627},
}

Tropical Graph Signal Processing

V. Gripon, "Tropical Graph Signal Processing," dans Proceedings of the Asilomar conference, pp. 50--54, octobre 2017.

For the past few years, the domain of graph signal processing has extended classical Fourier analysis to domains described by graphs. Most of the results were obtained by analogy with the study of heat propagation. We propose to perform a similar analysis in the context of tropical algebra, widely used in theoretical computer science to monitor propagation processes over graphs of distances. We introduce a Tropical Graph Fourier Transform and prove a few results on graph inference and the existence of a tropical uncertainty principle.

Télécharger le manuscrit.

Bibtex
@inproceedings{Gri201710,
  author = {Vincent Gripon},
  title = {Tropical Graph Signal Processing},
  booktitle = {Proceedings of the Asilomar
conference},
  year = {2017},
  pages = {50--54},
  month = {October},
}

Robust Associative Memories Naturally Occuring From Recurrent Hebbian Networks Under Noise

E. Coyac, V. Gripon, C. Langlais et C. Berrou, "Robust Associative Memories Naturally Occuring From Recurrent Hebbian Networks Under Noise," dans Arxiv Preprint, septembre 2017.

The brain is a noisy system subject to energy constraints. These facts are rarely taken into account when modelling artificial neural networks. In this paper, we are interested in demonstrating that those factors can actually lead to the appearance of robust associative memories. We first propose a simplified model of noise in the brain, taking into account synaptic noise and interference from neurons external to the network. We show that, when coarsely quantized, this noise can be reduced to insertions and erasures. We take a neural network with recurrent modifiable connections, and subject it to noisy external inputs. We introduce an energy usage limitation principle in the network as well as consolidated Hebbian learning, resulting in an incremental processing of inputs. We show that the connections naturally formed correspond to state-of-the-art binary sparse associative memories.

Télécharger le manuscrit.

Bibtex
@inproceedings{CoyGriLanBer201709,
  author = {Eliott Coyac and Vincent Gripon and
Charlotte Langlais and Claude Berrou},
  title = {Robust Associative Memories Naturally
Occuring From Recurrent Hebbian Networks Under Noise},
  booktitle = {Arxiv Preprint},
  year = {2017},
  month = {September},
}

Performance of Neural Clique Networks Subject to Synaptic Noise

E. Coyac, V. Gripon, C. Langlais et C. Berrou, "Performance of Neural Clique Networks Subject to Synaptic Noise," dans Proceedings of Cognitive, pp. 4--9, février 2017.

Artificial neural networks are so-called because they are supposed to be inspired by the brain and the way neurons work. While some networks are used purely for computational purposes and do not endeavor to be a plausible representation of what happens in the brain, such as deep learning neural networks, others do. However, the question of noise in the brain and its impact on the functioning of those networks has been little studied. For example, it is widely known that synapses misfire with a significant probability. We model this noise and study its impact on associative memories powered by neural networks: neural clique networks, with Hopfield networks as a reference point. We show that synaptic noise can in fact slightly improve the performance of the decoding process of neural clique networks by avoiding local minima.

Télécharger le manuscrit.

Bibtex
@inproceedings{CoyGriLanBer201702,
  author = {Eliott Coyac and Vincent Gripon and
Charlotte Langlais and Claude Berrou},
  title = {Performance of Neural Clique Networks
Subject to Synaptic Noise},
  booktitle = {Proceedings of Cognitive},
  year = {2017},
  pages = {4--9},
  month = {February},
}

Finding All Matches in a Database using Binary Neural Networks

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel et M. Jezequel, "Finding All Matches in a Database using Binary Neural Networks," dans Proceedings of Cognitive, pp. 59--64, février 2017.

The most efficient architectures of associative memories are based on binary neural networks. For example, Sparse Clustered Networks (SCNs) are able to achieve almost optimal memory efficiency while providing robust indexation of pieces of information through cliques in a neural network. In the canonical formulation of the associative memory problem, the unique stored message matching a given input probe is to be retrieved. In this paper, we focus on the more general problem of finding all messages matching the given probe. We consider real datasets from which many different messages can match given probes, which cannot be done with uniformly distributed messages due to their unlikelihood of sharing large common parts with one another. Namely, we implement a crossword dictionary containing 8-letter English words, and a chess endgame dataset using associative memories based on binary neural networks. We explain how to adapt SCNs’ architecture to this challenging dataset and introduce a backtracking procedure to retrieve all completions of the given input. We evaluate the performance of the proposed method using different measures and discuss the importance of parameters.

Télécharger le manuscrit.

Bibtex
@inproceedings{HacGriFarArzJez201702,
  author = {Ghouthi Boukli Hacene and Vincent Gripon
and Nicolas Farrugia and Matthieu Arzel and Michel
Jezequel},
  title = {Finding All Matches in a Database using
Binary Neural Networks},
  booktitle = {Proceedings of Cognitive},
  year = {2017},
  pages = {59--64},
  month = {February},
}

An Intrinsic Difference Between Vanilla RNNs and GRU Models

T. Stérin, N. Farrugia et V. Gripon, "An Intrinsic Difference Between Vanilla RNNs and GRU Models," dans Proceedings of Cognitive, pp. 76--81, février 2017.

In order to perform well in practice, Recurrent Neural Networks (RNNs) require computationally heavy architectures, such as the Gated Recurrent Unit (GRU) or Long Short Term Memory (LSTM). Indeed, the original Vanilla model fails to encapsulate medium- and long-term sequential dependencies. The aim of this paper is to show that gradient training issues, which have motivated the introduction of LSTM and GRU models, are not sufficient to explain the failure of the simplest RNN. Using the example of Reber’s grammar, we propose an experimental measure of both Vanilla and GRU models, which suggests an intrinsic difference in their dynamics. A better mathematical understanding of this difference could lead to more efficient models without compromising performance.

Télécharger le manuscrit.

Bibtex
@inproceedings{StFarGri201702,
  author = {Tristan Stérin and Nicolas Farrugia and
Vincent Gripon},
  title = {An Intrinsic Difference Between Vanilla
RNNs and GRU Models},
  booktitle = {Proceedings of Cognitive},
  year = {2017},
  pages = {76--81},
  month = {February},
}

Neighborhood-Preserving Translations on Graphs

N. Grelier, B. Pasdeloup, J. Vialatte et V. Gripon, "Neighborhood-Preserving Translations on Graphs," dans Proceedings of GlobalSIP, pp. 410--414, octobre 2016.

In many domains (e.g. Internet of Things, neuroimaging) signals are naturally supported on graphs. These graphs usually convey information on similarity between the values taken by the signal at the corresponding vertices. One interest of using graphs is that they allow one to define ad hoc operators to perform signal processing. Among them, translations are of paramount importance in many tasks. In this paper, we propose new definitions of translations on graphs using a few simple properties. Namely, we propose to define translations as functions from vertices to adjacent ones that preserve the neighborhood properties of the graph. We show that our definitions, contrary to other works on the subject, match usual translations on grid graphs.


Bibtex
@inproceedings{GrePasViaGri201610,
  author = {Nicolas Grelier and Bastien Pasdeloup and
Jean-Charles Vialatte and Vincent Gripon},
  title = {Neighborhood-Preserving Translations on
Graphs},
  booktitle = {Proceedings of GlobalSIP},
  year = {2016},
  pages = {410--414},
  month = {October},
}

Distributed Coding and Synaptic Pruning

E. Coyac, V. Gripon, C. Langlais et C. Berrou, "Distributed Coding and Synaptic Pruning," dans Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 206--210, septembre 2016.

This paper deals with the modeling of synaptic pruning in the developing brain, at the informational level. Relying on the hypotheses of clique-based memory and Hebbian learning, we consider several scenarios to try to understand how reliable point-to-point communication may be achieved in the cortex, given that at birth neurons are connected to a multitude of other neurons, akin to a broadcast-type connection. It is shown that quasi-perfect transfer of information can be obtained in a plausible way using simple rules if a mechanism of synaptic normalization is implemented at the receiver side.

Télécharger le manuscrit.

Bibtex
@inproceedings{CoyGriLanBer20169,
  author = {Eliott Coyac and Vincent Gripon and
Charlotte Langlais and Claude Berrou},
  title = {Distributed Coding and Synaptic Pruning},
  booktitle = {Proceedings of the 9th International
Symposium on Turbo Codes and Iterative Information
Processing},
  year = {2016},
  pages = {206--210},
  month = {September},
}

A Turbo-Inspired Iterative Approach for Correspondence Problems of Image Features

A. Aboudib, V. Gripon et G. Coppin, "A Turbo-Inspired Iterative Approach for Correspondence Problems of Image Features," dans Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 226--230, septembre 2016.

Establishing correspondences between image features is a fundamental problem in many computer vision tasks. It is traditionally viewed as a graph matching problem, and solved using an optimization procedure. In this paper, we propose a new approach to solving the correspondence problem from a coding/decoding perspective. We then present an iterative matching algorithm inspired from the turbo-decoding concept. We provide an experimental evaluation of the proposed method, and show that it performs better than state-of-the-art algorithms in the presence of clutter, thanks to turbo-style decoding.

Télécharger le manuscrit.

Bibtex
@inproceedings{AboGriCop20169,
  author = {Ala Aboudib and Vincent Gripon and Gilles
Coppin},
  title = {A Turbo-Inspired Iterative Approach for
Correspondence Problems of Image Features},
  booktitle = {Proceedings of the 9th International
Symposium on Turbo Codes and Iterative Information
Processing},
  year = {2016},
  pages = {226--230},
  month = {September},
}

Assembly Output Codes for Learning Neural Networks

P. Tigréat, C. R. K. Lassance, X. Jiang, V. Gripon et C. Berrou, "Assembly Output Codes for Learning Neural Networks," dans Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 285--289, septembre 2016.

Neural network-based classifiers usually encode the class labels of input data via a completely disjoint code, i.e. a binary vector with only one bit associated with each category. We use coding theory to propose assembly codes where each element is associated with several classes, making for better target vectors. These codes emulate the combination of several classifiers, which is a well-known method to improve decision accuracy. Our experiments on datasets such as MNIST with a multi-layer neural network show that assembly output codes, which are characterized by a higher minimum Hamming distance, result in better classification performance. These codes are also well suited to the use of clustered clique-based networks in category representation.

Télécharger le manuscrit.

Bibtex
@inproceedings{TigLasJiaGriBer20169,
  author = {Philippe Tigréat and Carlos Rosar Kos
Lassance and Xiaoran Jiang and Vincent Gripon and
Claude Berrou},
  title = {Assembly Output Codes for Learning Neural
Networks},
  booktitle = {Proceedings of the 9th International
Symposium on Turbo Codes and Iterative Information
Processing},
  year = {2016},
  pages = {285--289},
  month = {September},
}

Nearest Neighbour Search Using Binary Neural Networks

D. Ferro, V. Gripon et X. Jiang, "Nearest Neighbour Search Using Binary Neural Networks," dans Proceedings of IJCNN, pp. 5106--5112, juillet 2016.

The problem of finding nearest neighbours in terms of Euclidean distance, Hamming distance or another distance metric is a very common operation in computer vision and pattern recognition. In order to accelerate the search for the nearest neighbour in large collections, many methods rely on the coarse-fine approach. In this paper, we propose to combine Product Quantization (PQ) and binary neural associative memories to perform the coarse search. Our motivation lies in the fact that, in these neural networks, the dimension of the representation associated with a set of k vectors is independent of k. We run experiments on the TEXMEX SIFT1M and MNIST databases and observe significant improvements in terms of complexity of the search compared to raw PQ.

Télécharger le manuscrit.

Bibtex
@inproceedings{FerGriJia201607,
  author = {Demetrio Ferro and Vincent Gripon and
Xiaoran Jiang},
  title = {Nearest Neighbour Search Using Binary
Neural Networks},
  booktitle = {Proceedings of IJCNN},
  year = {2016},
  pages = {5106--5112},
  month = {July},
}

SimNet: A new algorithm for measuring brain networks similarity

A. Mheich, M. Hassan, F. Wendling, M. Khalil, O. Dufor, V. Gripon et C. Berrou, "SimNet: A new algorithm for measuring brain networks similarity," dans Proceedings of the ICABME international conference, pp. 119--122, 2015.

Measuring similarity among graphs is recognized as a non-trivial problem. Most of the algorithms proposed so far ignore the spatial location of vertices, which is a crucial factor in the context of brain networks. In this paper, we present a novel algorithm, called “SimNet”, for measuring the similarity between two graphs whose vertices represent the position of sources over the cortex. The novelty is to account for differences at the level of spatially-registered vertices and edges. Simulated graphs are used to evaluate the algorithm performance and to compare it with methods reported elsewhere. Results show that SimNet is able to quantify the similarity between two graphs under a spatial constraint based on the 3D location of edges. The application of SimNet to real data (dense EEG) reveals the presence of spatially different brain network modules activating during cognitive activity.

Télécharger le manuscrit.

Bibtex
@inproceedings{MheHasWenKhaDufGriBer2015,
  author = {A. Mheich and M. Hassan and F. Wendling
and M. Khalil and O. Dufor and V. Gripon and C.
Berrou},
  title = {SimNet: A new algorithm for measuring brain
networks similarity},
  booktitle = {Proceedings of the ICABME international
conference},
  year = {2015},
  pages = {119--122},
}

Vers une caractérisation de la courbe d'incertitude pour des graphes portant des signaux

B. Pasdeloup, V. Gripon, G. Mercier et D. Pastor, "Vers une caractérisation de la courbe d'incertitude pour des graphes portant des signaux," dans Actes de la conférence GRETSI, 2015.

Signal processing on graphs is a recent field that aims to generalize the classical tools of signal processing in order to analyze signals evolving on complex domains. These domains are represented by graphs for which one can compute a matrix called the normalized Laplacian [3]. It has been shown that the eigenvalues of this Laplacian correspond to the frequencies of the Fourier domain in classical signal processing [2]. Therefore, the frequency domain is not identical for every graph supporting signals. One consequence is that there is no non-trivial generalization of Heisenberg's uncertainty principle, which states that a signal cannot be localized both in the time domain and in the frequency domain. One way to generalize this principle, introduced by Agaskar & Lu in [1], consists of determining a curve serving as a lower bound on the trade-off between precision in the graph domain and precision in the spectral domain. The goal of this paper is to propose a characterization of the signals reaching this curve, for a class of graphs more generic than the one studied by Agaskar & Lu.

Télécharger le manuscrit.

Bibtex
@inproceedings{PasGriMerPas2015,
  author = {Bastien Pasdeloup and Vincent Gripon and
Grégoire Mercier and Dominique Pastor},
  title = {Vers une caractérisation de la courbe
d'incertitude pour des graphes portant des signaux},
  booktitle = {Proceedings of the GRETSI conference},
  year = {2015},
}

Réseaux de Clusters de Neurones Restreints

R. Danilo, V. Gripon, P. Coussy et L. Conde-Canencia, "Réseaux de Clusters de Neurones Restreints," dans Actes de la conférence GRETSI, 2015.

Associative memories are able to retrieve a previously stored message when an incomplete version of it is presented. A model of associative memory based on binary neurons and binary connections, named Clustered Neural Network (CNN), was recently introduced. The performance of this model drops when the stored messages follow a non-uniform distribution. The goal of this paper is to propose a new associative memory model, inspired by CNNs and restricted Boltzmann machines, that reduces the vulnerability to non-uniform messages. A hardware implementation of this model is also presented; it multiplies the number of stored messages by 3 with an increase in hardware complexity of only 40%.

Télécharger le manuscrit.

Bibtex
@inproceedings{DanGriCouCon2015,
  author = {Robin Danilo and Vincent Gripon and
Philippe Coussy and Laura Conde-Canencia},
  title = {Réseaux de Clusters de Neurones
Restreints},
  booktitle = {Proceedings of the GRETSI conference},
  year = {2015},
}

Impact du bruit synaptique sur les performances des réseaux de cliques neurales

E. Coyac, V. Gripon et C. Langlais, "Impact du bruit synaptique sur les performances des réseaux de cliques neurales," dans Actes de la conférence GRETSI, 2015.

Neural networks draw inspiration from the functioning and structure of the brain, and the argument of biological plausibility is often used to defend or criticize one model over another. The neurobiological literature tells us that the brain is a noisy system in which components and their connections are unreliable. Yet the behavior of neural networks in the presence of this noise has rarely been studied. This paper proposes a model of cerebral noise and analyzes its impact on the performance of networks of neural cliques. We show that noise can improve them, notably by avoiding local minima. We also analyze the impact of this noise on the performance of Hopfield networks.

Télécharger le manuscrit.

Bibtex
@inproceedings{CoyGriLan2015,
  author = {Eliott Coyac and Vincent Gripon and
Charlotte Langlais},
  title = {Impact du bruit synaptique sur les
performances des réseaux de cliques neurales},
  booktitle = {Proceedings of the GRETSI conference},
  year = {2015},
}

Graph Reconstruction from the Observation of Diffused Signals

B. Pasdeloup, M. Rabbat, V. Gripon, D. Pastor et G. Mercier, "Graph Reconstruction from the Observation of Diffused Signals," dans Proceedings of the 53rd Allerton Conference, pp. 1386--1390, octobre 2015.

Signal processing on graphs has received a lot of attention in recent years. Many techniques have arisen, inspired by classical signal processing ones, to allow studying signals on any kind of graph. A common aspect of these techniques is that they require a graph correctly modeling the studied support to explain the signals that are observed on it. However, in many cases, such a graph is unavailable or has no real physical existence. An example of this latter case is a set of sensors randomly thrown in a field, which obviously observe related information. To study such signals, there is no intuitive choice for a support graph. In this document, we address the problem of inferring a graph structure from the observation of signals, under the assumption that they result from the diffusion of initially i.i.d. signals. To validate our approach, we design an experimental protocol in which we diffuse signals on a known graph. Then, we forget the graph, and show that we are able to retrieve it very precisely from knowledge of the diffused signals alone.

Télécharger le manuscrit.

Bibtex
@inproceedings{PasRabGriPasMer201510,
  author = {Bastien Pasdeloup and Michael Rabbat and
Vincent Gripon and Dominique Pastor and Gregoire
Mercier},
  title = {Graph Reconstruction from the Observation
of Diffused Signals},
  booktitle = {Proceedings of the 53rd Allerton
Conference},
  year = {2015},
  pages = {1386--1390},
  month = {October},
}

Toward an uncertainty principle for weighted graphs

B. Pasdeloup, R. Alami, V. Gripon et M. Rabbat, "Toward an uncertainty principle for weighted graphs," dans Proceedings of the 23rd European Signal Processing Conference, pp. 1496--1500, juillet 2015.

The uncertainty principle states that a signal cannot be localized both in time and frequency. With the aim of extending this result to signals on graphs, Agaskar & Lu introduce notions of graph and spectral spreads. They show that a graph uncertainty principle holds for some families of unweighted graphs. This principle states that a signal cannot be simultaneously localized both in graph and spectral domains. In this paper, we aim to extend their work to weighted graphs. We show that a naive extension of their definitions leads to inconsistent results such as discontinuity of the graph spread when regarded as a function of the graph structure. To circumvent this problem, we propose another definition of graph spread that relies on an inverse similarity matrix. We also discuss the choice of the distance function that appears in this definition. Finally, we compute and plot uncertainty curves for families of weighted graphs.
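
For reference, a hedged recap in LaTeX of the two spreads as we understand them from Agaskar & Lu (our notation; the weighted-graph variant proposed in the paper replaces the geodesic distance d by distances computed on an inverse similarity matrix):

\Delta_g^2(u_0, x) = \frac{1}{\|x\|_2^2} \sum_{v \in V} d(u_0, v)^2 \, x(v)^2,
\qquad
\Delta_s^2(x) = \frac{x^\top \mathcal{L} \, x}{\|x\|_2^2},

where u_0 is a chosen center vertex and \mathcal{L} the normalized Laplacian. The uncertainty curve is the lower boundary of the set of achievable pairs (\Delta_g^2, \Delta_s^2); with a naive weighted extension of d, \Delta_g^2 can vary discontinuously with the graph structure, which is the inconsistency the paper circumvents.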


Bibtex
@inproceedings{PasAlaGriRab201507,
  author = {Bastien Pasdeloup and Réda Alami and
Vincent Gripon and Michael Rabbat},
  title = {Toward an uncertainty principle for
weighted graphs},
  booktitle = {Proceedings of the 23rd European Signal
Processing Conference},
  year = {2015},
  pages = {1496--1500},
  month = {July},
}

Algorithm and Implementation of an Associative Memory for Oriented Edge Detection Using Improved Clustered Neural Networks

R. Danilo, H. Jarollahi, V. Gripon, P. Coussy, L. Conde-Canencia et W. J. Gross, "Algorithm and Implementation of an Associative Memory for Oriented Edge Detection Using Improved Clustered Neural Networks," dans Proceedings of ISCAS Conference, pp. 2501--2504, mai 2015.

Associative memories are capable of retrieving previously stored patterns given parts of them. This feature makes them good candidates for pattern detection in images. Clustered Neural Networks are a recently introduced family of associative memories that allows fast pattern retrieval when implemented in hardware. In this paper, we propose a new pattern retrieval algorithm that results in a dramatically lower error rate compared to that of the conventional approach when used in an oriented edge detection process. This function plays an important role in image processing. Furthermore, we present the corresponding hardware architecture and implementation of the new approach in comparison with a conventional architecture from the literature, and show that the proposed architecture does not significantly affect hardware complexity.

Télécharger le manuscrit.

Bibtex
@inproceedings{DanJarGriCouConGro20155,
  author = {Robin Danilo and Homman Jarollahi and
Vincent Gripon and Philippe Coussy and Laura
Conde-Canencia and Warren J. Gross},
  title = {Algorithm and Implementation of an
Associative Memory for Oriented Edge Detection Using
Improved Clustered Neural Networks},
  booktitle = {Proceedings of ISCAS Conference},
  year = {2015},
  pages = {2501--2504},
  month = {May},
}

Restricted Clustered Neural Network for Storing Real Data

R. Danilo, V. Gripon, P. Coussy, L. Conde-Canencia et W. J. Gross, "Restricted Clustered Neural Network for Storing Real Data," dans proceedings of GLSVLSI conference, pp. 205--210, mai 2015.

Associative memories are an alternative to classical indexed memories that are capable of retrieving a previously stored message when an incomplete version of this message is presented. Recently, a new model of associative memory based on binary neurons and binary links has been proposed. This model, named Clustered Neural Network (CNN), offers large storage diversity (number of messages stored) and fast message retrieval when implemented in hardware. The performance of this model drops when the stored message distribution is non-uniform. In this paper, we enhance the CNN model to support non-uniform message distributions by adding features of Restricted Boltzmann Machines. In addition, we present a fully parallel hardware design of the model. The proposed implementation multiplies the performance (diversity) of Clustered Neural Networks by a factor of 3 with an increase in complexity of 40%.

Télécharger le manuscrit.

Bibtex
@inproceedings{DanGriCouConGro20155,
  author = {Robin Danilo and Vincent Gripon and
Philippe Coussy and Laura Conde-Canencia and Warren J.
Gross},
  title = {Restricted Clustered Neural Network for
Storing Real Data},
  booktitle = {proceedings of GLSVLSI conference},
  year = {2015},
  pages = {205--210},
  month = {May},
}

A novel algorithm for measuring graph similarity: application to brain networks

A. Mheich, M. Hassan, V. Gripon, O. Dufor, M. Khalil, C. Berrou et F. Wendling, "A novel algorithm for measuring graph similarity: application to brain networks," dans Proceedings of the IEEE EMBS Neural Engineering Conference, pp. 1068--1071, avril 2015.

Measuring the similarity among graphs is a challenging issue in many disciplines, including neuroscience. Several algorithms, mainly based on vertex or edge properties, have been proposed to address this issue. Most of them ignore the physical location of the vertices, which is a crucial factor in the analysis of brain networks. Indeed, functional brain networks are usually represented as graphs composed of vertices (brain regions) connected by edges (functional connectivity). In this paper, we propose a novel algorithm to measure the similarity between graphs. The novelty of our approach is to account for vertices, edges and spatiality at the same time. The proposed algorithm is evaluated using synthetic graphs. It shows a high ability to detect and measure similarity between graphs. An application to real functional brain networks is then described. The algorithm allows for quantification of the inter-subject variability during a picture naming task.

Télécharger le manuscrit.

Bibtex
@inproceedings{MheHasGriDufKhaBerWen20154,
  author = {A. Mheich and M. Hassan and V. Gripon and
O. Dufor and M. Khalil and C. Berrou and F. Wendling},
  title = {A novel algorithm for measuring graph
similarity: application to brain networks},
  booktitle = {Proceedings of the IEEE EMBS Neural
Engineering Conference},
  year = {2015},
  pages = {1068--1071},
  month = {April},
}

A Model of Bottom-Up Visual Attention Using Cortical Magnification

A. Aboudib, V. Gripon et G. Coppin, "A Model of Bottom-Up Visual Attention Using Cortical Magnification," dans Proceedings of ICASSP, pp. 1493--1497, avril 2015.

The focus of visual attention has been argued to play a key role in object recognition. Many computational models of visual attention have been proposed to estimate locations of eye fixations driven by bottom-up stimuli. Most of these models rely on pyramids consisting of multiple scaled versions of the visual scene. This design aims at capturing the fact that neural cells in higher visual areas tend to have larger receptive fields (RFs). On the other hand, very few models represent the multi-scaling resulting from the eccentricity-dependent RF sizes within each visual layer, also known as the cortical magnification effect. In this paper, we demonstrate that using a cortical-magnification-like mechanism can lead to high-performing alternatives to pyramidal approaches in the context of attentional modeling. Moreover, we argue that introducing such a mechanism equips the proposed model with additional properties related to overt attention and distance-dependent saliency that are worth exploring.

Télécharger le manuscrit.

Bibtex
@inproceedings{AboGriCop20154,
  author = {Ala Aboudib and Vincent Gripon and Gilles
Coppin},
  title = {A Model of Bottom-Up Visual Attention Using
Cortical Magnification},
  booktitle = {Proceedings of ICASSP},
  year = {2015},
  pages = {1493--1497},
  month = {April},
}

Automatic face recognition using SIFT and networks of tagged neural cliques

E. S. Gooya, D. Pastor et V. Gripon, "Automatic face recognition using SIFT and networks of tagged neural cliques," dans Proceedings of Cognitive, pp. 57--61, mars 2015.

Networks of neural cliques, which carry information through fully interconnected subgraphs, have recently been improved. In this paper, a face recognition system is presented using such networks, where local descriptors are used to perform feature extraction. In the wide range of possible image descriptors for face recognition, we focus specifically on the Scale Invariant Feature Transform (SIFT). In contrast to standard methods, our proposed method requires no empirically chosen threshold. Moreover, it performs matching between sets of features, in addition to individual feature matching. Thus, we favor joint occurrences of descriptors during the recognition process. We compare our approach to state-of-the-art face recognition systems based on SIFT descriptors. The evaluation is carried out on the Olivetti and Oracle Research Laboratory (ORL) face database, whose diversity is significant for assessing face recognition methods.

Best paper award

Télécharger le manuscrit.

Bibtex
@inproceedings{GooPasGri20153,
  author = {Ehsan Sedgh Gooya and Dominique Pastor and
Vincent Gripon},
  title = {Automatic face recognition using SIFT and
networks of tagged neural cliques},
  booktitle = {Proceedings of Cognitive},
  year = {2015},
  pages = {57--61},
  month = {March},
}

Using Tags to Improve Diversity of Sparse Associative Memories

S. Larroque, E. S. Gooya, V. Gripon et D. Pastor, "Using Tags to Improve Diversity of Sparse Associative Memories," dans Proceedings of Cognitive, pp. 1--7, mars 2015.

Associative memories, a classical model for brain long-term memory, face interference between old and new memories. Usually, the only remedy is to enlarge the network so as to retain more memories without collisions: this is the network’s size–diversity trade-off. We propose a novel way of representing data in these networks to provide another means to extend diversity without resizing the network. We show from our analysis and simulations that this method is a viable alternative, which can perfectly fit cases where the network’s size is constrained, such as neuromorphic FPGA boards implementing associative memories.

Télécharger le manuscrit.

Bibtex
@inproceedings{LarGooGriPas20153,
  author = {Stephen Larroque and Ehsan Sedgh Gooya and
Vincent Gripon and Dominique Pastor},
  title = {Using Tags to Improve Diversity of Sparse
Associative Memories},
  booktitle = {Proceedings of Cognitive},
  year = {2015},
  pages = {1--7},
  month = {March},
}

Neural Associative Memories as Accelerators for Binary Vector Search

C. Yu, V. Gripon, X. Jiang et H. Jégou, "Neural Associative Memories as Accelerators for Binary Vector Search," dans Proceedings of Cognitive, pp. 85--89, mars 2015.

Associative memories aim at matching a noisy input vector with a stored one. The matched vector satisfies a minimum distance criterion with respect to the inner metric of the device. This problem of finding nearest neighbors in terms of Euclidean or Hamming distances is a very common operation in machine learning and pattern recognition. However, the inner metrics of associative memories are often ill-suited to practical scenarios. In this paper, we adapt Willshaw networks in order to use them for accelerating nearest neighbor search with limited impact on accuracy. We provide a theoretical analysis of our method for binary sparse vectors. We also test our method using the MNIST handwritten digits database. Both our analysis on synthetic data and our experiments with real data evidence a significant gain in complexity with negligible loss in performance compared to exhaustive search.

Télécharger le manuscrit.

Bibtex
@inproceedings{YuGriJiaJé20153,
  author = {Chendi Yu and Vincent Gripon and Xiaoran
Jiang and Hervé Jégou},
  title = {Neural Associative Memories as Accelerators
for Binary Vector Search},
  booktitle = {Proceedings of Cognitive},
  year = {2015},
  pages = {85--89},
  month = {March},
}

Implementing Relational-Algebraic Operators for Improving Cognitive Abilities in Networks of Neural Cliques

A. Aboudib, V. Gripon et B. Tessiau, "Implementing Relational-Algebraic Operators for Improving Cognitive Abilities in Networks of Neural Cliques," dans Proceedings of Cognitive, pp. 36--41, mars 2015.

Associative memories are devices capable of retrieving previously stored messages from parts of their content. They are used in a variety of applications including CPU caches, routers, intrusion detection systems, etc. They are also considered a good model for human memory, motivating the use of neural-based techniques. When it comes to cognition, it is important to provide such devices with the ability to perform complex queries, such as union, intersection, difference, projection and selection. In this paper, we extend a recently introduced associative memory model to perform relational algebra operations. We introduce new algorithms and discuss their performance, which provides insight into how the brain performs some high-level information processing tasks.

Télécharger le manuscrit.

Bibtex
@inproceedings{AboGriTes20153,
  author = {Ala Aboudib and Vincent Gripon and
Baptiste Tessiau},
  title = {Implementing Relational-Algebraic Operators
for Improving Cognitive Abilities in Networks of
Neural Cliques},
  booktitle = {Proceedings of Cognitive},
  year = {2015},
  pages = {36--41},
  month = {March},
}

Algorithm and Architecture for a Multiple-Field Context-Driven Search Engine Using Fully-Parallel Clustered Associative Memories

H. Jarollahi, N. Onizawa, V. Gripon, T. Hanyu et W. J. Gross, "Algorithm and Architecture for a Multiple-Field Context-Driven Search Engine Using Fully-Parallel Clustered Associative Memories," dans Proceedings of SiPS, pp. 1--6, octobre 2014.

In this paper, a context-driven search engine is presented based on a new family of associative memories. It stores only the associations between items from multiple search fields in the form of binary links, and merges repeated field items to reduce the memory requirements. It achieves a 13.6× reduction in memory bits and accesses, and an 8.6× reduction in the number of clock cycles per search operation, compared to a classical field-based search structure using content-addressable memory. Furthermore, using parallel computational nodes, the proposed search engine achieves a five-orders-of-magnitude reduction in the number of clock cycles compared to a CPU-based counterpart running a classical search algorithm in software.

Télécharger le manuscrit.

Bibtex
@inproceedings{JarOniGriHanGro201410,
  author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Takahiro Hanyu and Warren J.
Gross},
  title = {Algorithm and Architecture for a
Multiple-Field Context-Driven Search Engine Using
Fully-Parallel Clustered Associative Memories},
  booktitle = {Proceedings of SiPS},
  year = {2014},
  pages = {1--6},
  month = {October},
}

Sparse Binary Matrices as Efficient Associative Memories

V. Gripon, V. Skachek and M. Rabbat, "Sparse Binary Matrices as Efficient Associative Memories," in Proceedings of the 52nd Allerton conference, pp. 499--504, October 2014.

Associative memories are widely used devices which can be viewed as universal error-correcting decoders. Employing error-correcting code principles in these devices has made it possible to greatly enhance their performance. In this paper we reintroduce a neural-based model using the formalism of linear algebra and extend its functionality, originally limited to erasure retrieval, to handle approximate inputs. In order to perform the retrieval, we use an iterative algorithm that provably converges. We then analyze the performance of the associative memory under the assumption of connection independence. We support our theoretical results with numerical simulations.
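
As a rough illustration of this style of iterative retrieval (a sketch under assumed parameters; the paper's exact update rule and analysis differ), one can iterate a simple "connected to all active units" rule until a fixed point is reached:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 50, 6                     # units, stored messages, active bits

X = np.zeros((m, n), dtype=int)
for row in X:
    row[rng.choice(n, size=k, replace=False)] = 1
W = (X.T @ X > 0).astype(int)            # OR of outer products

def retrieve(z, max_iters=10):
    z = z.astype(int)
    for _ in range(max_iters):
        # keep exactly the units connected to every currently active unit
        new_z = (W @ z >= z.sum()).astype(int)
        if np.array_equal(new_z, z):
            return z                     # fixed point reached
        z = new_z
    return z

q = X[7].copy()
q[np.flatnonzero(q)[-1]] = 0             # erase one active unit
print(np.array_equal(retrieve(q), X[7])) # True under light load
```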

Download the manuscript.

Bibtex
@inproceedings{GriSkaRab201410,
  author = {Vincent Gripon and Vitaly Skachek and Michael Rabbat},
  title = {Sparse Binary Matrices as Efficient Associative Memories},
  booktitle = {Proceedings of the 52nd Allerton conference},
  year = {2014},
  pages = {499--504},
  month = {October},
}

Information, Noise, Coding, Modulation: What about the Brain?

C. Berrou, O. Dufor, V. Gripon and X. Jiang, "Information, Noise, Coding, Modulation: What about the Brain?," in Proceedings of the 8th symposium on Turbo Codes and Iterative Information Processing, pp. 167--172, August 2014.

At the microscopic level, the brain is fundamentally a matter of physics and chemistry, as are all the components of the universe. At the macroscopic scale, behavior, psychology and affects are the main dimensions of its life. To convert atoms and molecules into intelligence, some kind of information has to be fixed in the grey matter of the cerebral cortex. The way this "mental information" is materialized and processed is still an enigma, probably the most puzzling problem addressed by science today. At this mesoscopic level of brain functioning, the concepts to consider are likely the same as those considered in communication and information theory, mainly information, noise, coding and modulation. This paper proposes some ideas that could help understand some features of the brain from an information-processing perspective.

Download the manuscript.

Bibtex
@inproceedings{BerDufGriJia20148,
  author = {Claude Berrou and Olivier Dufor and Vincent Gripon and Xiaoran Jiang},
  title = {Information, Noise, Coding, Modulation: What about the Brain?},
  booktitle = {Proceedings of the 8th symposium on Turbo Codes and Iterative Information Processing},
  year = {2014},
  pages = {167--172},
  month = {August},
}

Huffman Coding for Storing Non-uniformly Distributed Messages in Networks of Neural Cliques

B. Boguslawski, V. Gripon, F. Seguin and F. Heitzmann, "Huffman Coding for Storing Non-uniformly Distributed Messages in Networks of Neural Cliques," in Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, volume 1, pp. 262--268, July 2014.

A new family of sparse neural networks achieving nearly optimal performance has recently been introduced. In these networks, messages are stored as cliques in clustered graphs. In this paper, we interpret these networks using the formalism of error correcting codes. To achieve this, we introduce two original codes, the thrifty code and the clique code, which are both sub-families of binary constant weight codes. We also provide the networks with an enhanced retrieving rule that enables a property of answer correctness and improves performance.

Download the manuscript.

Bibtex
@inproceedings{BogGriSegHei20147,
  author = {Bartosz Boguslawski and Vincent Gripon and Fabrice Seguin and Frédéric Heitzmann},
  title = {Huffman Coding for Storing Non-uniformly Distributed Messages in Networks of Neural Cliques},
  booktitle = {Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, volume 1},
  year = {2014},
  pages = {262--268},
  month = {July},
}

A GPU-based Associative Memory using Sparse Neural Networks

Z. Yao, V. Gripon and M. Rabbat, "A GPU-based Associative Memory using Sparse Neural Networks," in Proceedings of the PCNN-14 conference, pp. 688--692, July 2014.

Associative memories, serving as building blocks for a variety of algorithms, store content in such a way that it can later be retrieved by probing the memory with a small portion of it, rather than with an address as in more traditional memories. Recently, Gripon and Berrou have introduced a novel construction which builds on ideas from the theory of error correcting codes, greatly outperforming the celebrated Hopfield neural networks in terms of the number of stored messages per neuron and the number of stored bits per synapse. The work of Gripon and Berrou proposes two retrieval rules, SUM-OF-SUM and SUM-OF-MAX. In this paper, we implement both rules on a general purpose graphical processing unit (GPU). SUM-OF-SUM uses only matrix-vector multiplication and is easily implemented on the GPU, whereas SUM-OF-MAX, which involves non-linear operations, is much less straightforward to implement. However, SUM-OF-MAX gives significantly better retrieval error rates. We propose a hybrid scheme tailored for implementation on a GPU which achieves an 880-fold speedup without sacrificing any accuracy.
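
The following NumPy sketch (CPU-only, with illustrative sizes; not the paper's GPU code) shows the two retrieval rules side by side on a small clustered network:

```python
import numpy as np

rng = np.random.default_rng(0)
c, l, m = 8, 32, 100                  # clusters, units per cluster, messages
n = c * l

# Each message picks one unit per cluster; it is stored as a clique.
msgs = rng.integers(0, l, size=(m, c))
W = np.zeros((n, n), dtype=int)
for msg in msgs:
    idx = np.arange(c) * l + msg
    W[np.ix_(idx, idx)] = 1
np.fill_diagonal(W, 0)

def sum_of_sum(z):
    return W @ z                      # total incoming activity per unit

def sum_of_max(z):
    # per unit, count source clusters providing at least one active input
    return (W * z).reshape(n, c, l).max(axis=2).sum(axis=1)

def complete(msg, known, rule):
    z = np.zeros(n, dtype=int)
    z[np.array(known) * l + msg[known]] = 1
    out = rule(z).reshape(c, l).argmax(axis=1)   # winner-take-all per cluster
    out[known] = msg[known]                      # keep the observed symbols
    return out

msg, known = msgs[0], [0, 1, 2, 3]    # half the clusters observed
# under light load both rules typically recover msg exactly
print(complete(msg, known, sum_of_sum), msg)
print(complete(msg, known, sum_of_max), msg)
```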

Download the manuscript.

Bibtex
@inproceedings{YaoGriRab20147,
  author = {Zhe Yao and Vincent Gripon and Michael Rabbat},
  title = {A GPU-based Associative Memory using Sparse Neural Networks},
  booktitle = {Proceedings of the PCNN-14 conference},
  year = {2014},
  pages = {688--692},
  month = {July},
}

Cluster-based Associative Memories Built From Unreliable Storage

F. Leduc-Primeau, V. Gripon, M. Rabbat and W. Gross, "Cluster-based Associative Memories Built From Unreliable Storage," in ICASSP, pp. 8370--8374, May 2014.

We consider recently introduced associative memories based on clustered graphs. These memories are almost optimal in terms of the amount of storage they require (efficiency), and allow retrieving messages with low complexity. We study an unreliable implementation of the memory and compare its error rate and storage efficiency with those of a reliable implementation. We present analytical and simulation results indicating that the proposed memory structure can tolerate a large number of faults at a reasonable cost, thereby making it a good candidate for achieving highly efficient circuit implementations of associative memories.

Download the manuscript.

Bibtex
@inproceedings{LedGriRabGro20145,
  author = {François Leduc-Primeau and Vincent Gripon and Michael Rabbat and Warren Gross},
  title = {Cluster-based Associative Memories Built From Unreliable Storage},
  booktitle = {ICASSP},
  year = {2014},
  pages = {8370--8374},
  month = {May},
}

Towards a Spectral Characterization of Signals Supported on Small-World Networks

M. Rabbat and V. Gripon, "Towards a Spectral Characterization of Signals Supported on Small-World Networks," in ICASSP, pp. 4793--4797, May 2014.

We study properties of the family of small-world random graphs introduced in Watts & Strogatz (1998), focusing on the spectrum of the normalized graph Laplacian. This spectrum influences the extent to which a signal supported on the vertices of the graph can be simultaneously localized on the graph and in the spectral domain (the surrogate of the frequency domain for signals supported on a graph). This characterization has implications for inferring or interpolating functions supported on such graphs when observations are only available at a subset of nodes.
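
A quick way to observe such spectra empirically is sketched below using networkx (parameter values are illustrative, not those of the paper):

```python
import networkx as nx

# Spectrum of the normalized Laplacian of Watts-Strogatz graphs
# (n nodes, each joined to k nearest neighbors, rewiring probability p)
n, k = 500, 10
for p in [0.0, 0.1, 0.5, 1.0]:
    G = nx.watts_strogatz_graph(n, k, p, seed=0)
    lam = nx.normalized_laplacian_spectrum(G)   # eigenvalues in ascending order
    print(f"p={p:.1f}  lambda_2={lam[1]:.4f}  lambda_max={lam[-1]:.4f}")
```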

Download the manuscript.

Bibtex
@inproceedings{RabGri20145,
  author = {Michael Rabbat and Vincent Gripon},
  title = {Towards a Spectral Characterization of Signals Supported on Small-World Networks},
  booktitle = {ICASSP},
  year = {2014},
  pages = {4793--4797},
  month = {May},
}

A study of retrieval algorithms of sparse messages in networks of neural cliques

A. Aboudib, V. Gripon and X. Jiang, "A study of retrieval algorithms of sparse messages in networks of neural cliques," in Proceedings of Cognitive 2014, pp. 140--146, May 2014.

Associative memories are data structures addressed using part of the content rather than an index. They offer good fault tolerance and biological plausibility. Among the different families of associative memories, sparse ones are known to offer the best efficiency (the ratio of the number of bits stored to the number of bits used by the network itself). Their retrieval performance has been shown to benefit from the use of iterations. In this paper, we introduce several algorithms to enhance the performance of the retrieval process in recently proposed sparse associative memories based on binary neural networks. We show that these algorithms provide better performance than existing techniques and discuss their biological plausibility. We also analyze the required number of iterations and derive corresponding curves.

Download the manuscript.

Bibtex
@inproceedings{AboGriJia20145,
  author = {Ala Aboudib and Vincent Gripon and Xiaoran Jiang},
  title = {A study of retrieval algorithms of sparse messages in networks of neural cliques},
  booktitle = {Proceedings of Cognitive 2014},
  year = {2014},
  pages = {140--146},
  month = {May},
}

Sparse Structured Associative Memories as Efficient Set-Membership Data Structures

V. Gripon, V. Skachek and M. G. Rabbat, "Sparse Structured Associative Memories as Efficient Set-Membership Data Structures," in Proceedings of the 51st Allerton conference, pp. 500--505, October 2013.

We study the use of sparse structured associative memories as a memory-efficient and computationally-efficient data structure for representing a set of elements when one wishes to perform set-membership queries and some errors (false positives) are tolerable. Associative memories, when viewed as representing a set, enjoy a number of interesting properties, including that set membership queries can be carried out even when the input (query element) is only partially known or is partially corrupted. The associative memories considered here (initially proposed in [Gripon and Berrou, 2011]) encode the set in the edge structure of a graph. In this paper we generalize this construction to encode the set in the edge structure of a hypergraph. We derive bounds on the false positive rates (the probability that the associative memory erroneously declares that an element is in the stored set when it, in fact, was not). Interestingly, the proposed structures enjoy many of the same properties as Bloom filters (e.g., they have zero false negative rate, the time to perform an insert and lookup does not depend on the number of elements stored, and the false positive rate can be reduced by using additional memory for storage), while also offering the properties of associative memories (allowing for queries on partial or corrupted inputs).
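
The following toy Python sketch illustrates the set-membership usage (the element-to-clique mapping and the sizes are illustrative assumptions, not the paper's construction): insertions store cliques, and lookups check that all clique edges are present, so there are no false negatives:

```python
import numpy as np

rng = np.random.default_rng(0)
c, l = 8, 64                          # clusters, units per cluster
W = np.zeros((c * l, c * l), dtype=bool)

def to_clique(x):
    # map a 48-bit element to one unit per cluster (6 bits per cluster)
    return np.array([j * l + ((x >> (6 * j)) & (l - 1)) for j in range(c)])

def insert(x):
    idx = to_clique(x)
    W[np.ix_(idx, idx)] = True        # store the element as a clique

def query(x):
    idx = to_clique(x)
    return bool(W[np.ix_(idx, idx)].all())   # all clique edges present?

stored = [int(v) for v in rng.integers(0, 2**48, size=1000)]
for x in stored:
    insert(x)
print(all(query(x) for x in stored))         # True: no false negatives
probes = [int(v) for v in rng.integers(0, 2**48, size=1000)]
stored_set = set(stored)
fp = sum(query(x) for x in probes if x not in stored_set)
print("false positives among probes:", fp)   # typically 0 at this load
```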

Download the manuscript.

Bibtex
@inproceedings{GriSkaRab201310,
  author = {Vincent Gripon and Vitaly Skachek and Michael G. Rabbat},
  title = {Sparse Structured Associative Memories as Efficient Set-Membership Data Structures},
  booktitle = {Proceedings of the 51st Allerton conference},
  year = {2013},
  pages = {500--505},
  month = {October},
}

Maximum Likelihood Associative Memories

V. Gripon et M. Rabbat, "Maximum Likelihood Associative Memories," dans Proceedings of Information Theory Workshop, pp. 1--5, septembre 2013.

Associative memories are structures that store data in such a way that it can later be retrieved from a fraction of its content -- a form of robustness to errors and erasures. They are used in applications ranging from CPU caches and memory management to database engines. In this paper we study associative memories built on the maximum likelihood principle. We derive the minimum residual error rate when the data comes from a uniform binary source. We then determine the minimum amount of memory required to store such data. Finally, we give a bound on the algorithmic complexity of message retrieval. We compare these results with those of two existing associative memories: the celebrated Hopfield model and an architecture recently introduced by Gripon and Berrou.

Download the manuscript.

Bibtex
@inproceedings{GriRab20139,
  author = {Vincent Gripon and Michael Rabbat},
  title = {Maximum Likelihood Associative Memories},
  booktitle = {Proceedings of Information Theory Workshop},
  year = {2013},
  pages = {1--5},
  month = {September},
}

Mémoires associatives pour observations floues

V. Gripon and X. Jiang, "Mémoires associatives pour observations floues," in Proceedings of the XXIVth Gretsi seminar, September 2013.

We present an extension of recently introduced associative memories to the case of approximate inputs. We analyze the performance and show that, for a fixed error rate, memory consumption is within a constant factor of optimal while algorithmic complexity remains limited (quadratic). These extensions pave the way for using associative memories to perform complex searches in databases with low response times.

Download the manuscript.

Bibtex
@inproceedings{GriJia20139,
  author = {Vincent Gripon and Xiaoran Jiang},
  title = {Mémoires associatives pour observations floues},
  booktitle = {Proceedings of XXIV-th Gretsi seminar},
  year = {2013},
  month = {September},
}

Reconstructing a Graph from Path Traces

V. Gripon et M. Rabbat, "Reconstructing a Graph from Path Traces," dans Proceedings of International Symposium on Information Theory, pp. 2488--2492, juillet 2013.

This paper addresses the problem of inferring the structure of a network from indirect observations. Each observation (a "trace") is an unordered set of nodes that activate along a path through the network. Since a trace contains no information about the order of nodes within the path, there are many feasible orders for each trace, and the problem of inferring a network from traces is, in general, ill-posed. We propose and analyze an algorithm that inserts edges by ordering each trace into a path according to which pairs of nodes in the path appear most frequently across the set of observations. When all traces involve exactly three nodes, we derive necessary and sufficient conditions under which the reconstruction algorithm exactly recovers the original graph. Finally, for a family of random graphs, we derive the probabilities of reconstruction errors (falsely added and missed edges).
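
The Python sketch below conveys the flavor of such an algorithm for three-node traces (the ordering heuristic and the toy example are illustrative simplifications of the paper's algorithm):

```python
from collections import Counter
from itertools import combinations, permutations

def reconstruct(traces):
    """Each trace is an unordered 3-node set observed along a path.
    Order each trace so that its consecutive pairs are the pairs that
    co-occur most often across all traces, then add those two edges."""
    cooc = Counter()
    for t in traces:
        cooc.update(combinations(sorted(t), 2))
    w = lambda u, v: cooc[tuple(sorted((u, v)))]
    edges = set()
    for t in traces:
        a, b, c = max(permutations(sorted(t)),
                      key=lambda p: w(p[0], p[1]) + w(p[1], p[2]))
        edges.add(tuple(sorted((a, b))))
        edges.add(tuple(sorted((b, c))))
    return edges

# Toy example: traces generated along the cycle 0-1-2-3-4-5-0
traces = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {4, 5, 0}, {5, 0, 1}]
print(sorted(reconstruct(traces)))   # recovers the six cycle edges
```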

Download the manuscript.

Bibtex
@inproceedings{GriRab20137,
  author = {Vincent Gripon and Michael Rabbat},
  title = {Reconstructing a Graph from Path Traces},
  booktitle = {Proceedings of International Symposium on Information Theory},
  year = {2013},
  pages = {2488--2492},
  month = {July},
}

A Low-Power Content-Addressable-Memory Based on Clustered-Sparse-Networks

H. Jarollahi, V. Gripon, N. Onizawa and W. J. Gross, "A Low-Power Content-Addressable-Memory Based on Clustered-Sparse-Networks," in Proceedings of 24th International Conference on Application-specific Systems, Architectures and Processors, pp. 642--653, June 2013.

A Content-Addressable Memory (CAM) using a novel mechanism for associating input data with their corresponding addresses is introduced. The proposed architecture builds on recently introduced clustered binary neural networks, which on average eliminate most of the comparisons required during a search. As a result, the energy consumption of the proposed model is much lower than that of a conventional CAM. Given an input, the architecture identifies a handful of candidate rows in an associated CAM and performs comparisons only on those. Using 0.13 um CMOS technology, the energy consumption and the response time are respectively 9.5% and 30.4% of those of a classical NAND architecture, for a transistor-count overhead of 3.4%.

Download the manuscript.

Bibtex
@inproceedings{JarGriOniGro20136,
  author = {Hooman Jarollahi and Vincent Gripon and Naoya Onizawa and Warren J. Gross},
  title = {A Low-Power Content-Addressable-Memory Based on Clustered-Sparse-Networks},
  booktitle = {Proceedings of 24th International Conference on Application-specific Systems, Architectures and Processors},
  year = {2013},
  pages = {642--653},
  month = {June},
}

Reduced-complexity binary-weight-coded associative memories

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Reduced-complexity binary-weight-coded associative memories," in Proceedings of International Conference on Acoustics, Speech, and Signal Processing, pp. 2523--2527, May 2013.

Associative memories retrieve previously stored information given partial or erroneous inputs. Recently, a new family of associative memories based on clustered neural networks has been introduced, which can store many more messages than classical Hopfield networks. In this paper, we propose a hardware architecture of these memories for partial or erroneous inputs. Our architecture eliminates the winner-take-all modules and thereby reduces hardware complexity, consuming 65% fewer FPGA lookup tables and increasing the clock frequency by a factor of about 1.9 compared to previous work.

Download the manuscript.

Bibtex
@inproceedings{JarOniGriGro20135,
  author = {Hooman Jarollahi and Naoya Onizawa and Vincent Gripon and Warren J. Gross},
  title = {Reduced-complexity binary-weight-coded associative memories},
  booktitle = {Proceedings of International Conference on Acoustics, Speech, and Signal Processing},
  year = {2013},
  pages = {2523--2527},
  month = {May},
}

Compressing multisets using tries

V. Gripon, M. Rabbat, V. Skachek and W. J. Gross, "Compressing multisets using tries," in Proceedings of Information Theory Workshop, Lausanne, Switzerland, pp. 647--651, September 2012.

We consider the problem of efficiently and losslessly representing multisets of m elements drawn with repetition from a set of size 2^n. One may expect that encoding such a multiset should yield a large gain over encoding the corresponding sequence of draws, since the information about the order of the draws in the sequence amounts to a permutation. We propose and analyze an encoder/decoder based on prefix trees (tries). The encoding phase has complexity O(m(n + log m)) while the decoding phase has complexity O(mn). We focus in particular on the case where the number of elements m is 2^(n/c), for c > 1 and n → ∞. Under these conditions, and if the elements of the multiset are drawn uniformly at random, we show that the proposed encoding achieves an unbounded gain. Moreover, the expected length of the encoded multiset is asymptotically within a factor 5/3 of the lower bound.
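
The following Python sketch illustrates the principle on a small example (the serialization used here, with unary leaf counts, is a simplification for illustration; the encoder analyzed in the paper is more refined):

```python
from collections import Counter

def encode(multiset, nbits):
    """Preorder walk of the binary trie of the elements' bit strings:
    two bits per internal node mark which children exist; each leaf
    stores its multiplicity in unary."""
    items = sorted(Counter(multiset).items())
    bits = []
    def walk(items, depth):
        if depth == nbits:
            ((_, count),) = items
            bits.extend([1] * (count - 1) + [0])   # unary multiplicity
            return
        shift = nbits - depth - 1
        left = [it for it in items if not (it[0] >> shift) & 1]
        right = [it for it in items if (it[0] >> shift) & 1]
        bits += [1 if left else 0, 1 if right else 0]
        for child in (left, right):
            if child:
                walk(child, depth + 1)
    walk(items, 0)
    return bits

ms = [0, 1, 1, 2, 3, 3, 3, 4, 5, 6, 6, 7, 7, 7, 0, 2, 4, 5, 1, 3]
code = encode(ms, nbits=3)
print(len(code))   # 34 bits, versus 20 draws x 3 bits = 60 for the sequence
```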

Download the manuscript.
Download the presentation slides.

Bibtex
@inproceedings{GriRabSkaGro20129,
  author = {Vincent Gripon and Michael Rabbat and Vitaly Skachek and Warren J. Gross},
  title = {Compressing multisets using tries},
  booktitle = {Proceedings of Information Theory Workshop},
  year = {2012},
  address = {Lausanne, Switzerland},
  month = {September},
  pages = {647--651},
}

Random clique codes

V. Gripon, V. Skachek, W. J. Gross and M. Rabbat, "Random clique codes," in Proceedings of the 7th International Symposium on Turbo Codes and Iterative Information Processing, Gothenburg, Sweden, pp. 121--125, August 2012.

A new family of associative memories based on neural networks has recently been introduced. These memories achieve excellent performance thanks to the use of error-correcting code principles. Building on these memories, we introduce a new family of codes called clique codes. These codes rely on cliques in balanced c-partite graphs corresponding to associative memories. In particular, we study an ensemble of random clique codes and prove that these ensembles asymptotically contain good codes. Moreover, such codes can be decoded efficiently using associative memories (low complexity and memory footprint). They offer an interesting alternative to existing codes, in particular over memoryless erasure channels.

Download the manuscript.
Download the presentation slides.

Bibtex
@inproceedings{GriSkaGroRab20128,
  author = {Vincent Gripon and Vitaly Skachek and Warren J. Gross and Michael Rabbat},
  title = {Random clique codes},
  booktitle = {Proceedings of the 7th International Symposium on Turbo Codes and Iterative Information Processing},
  year = {2012},
  address = {Gothenburg, Sweden},
  month = {August},
  pages = {121--125},
}

Learning long sequences in binary neural networks

X. Jiang, V. Gripon and C. Berrou, "Learning long sequences in binary neural networks," in Proceedings of Cognitive 2012, Nice, France, pp. 165--170, July 2012.

This paper proposes an original architecture of sparse and oriented neural networks that introduces sequentiality into associative memories. The architecture can be viewed as a generalization of a recently introduced family of neural networks whose operation relies on the presence of cliques in the corresponding graphs. This new architecture is able to learn and then retrieve very long sequences from the knowledge of a consecutive portion of their symbols.

Download the manuscript.

Bibtex
@inproceedings{JiaGriBer20127,
  author = {Xiaoran Jiang and Vincent Gripon and Claude Berrou},
  title = {Learning long sequences in binary neural networks},
  booktitle = {Proceedings of Cognitive 2012},
  year = {2012},
  address = {Nice, France},
  pages = {165--170},
  month = {July},
}

Architecture and Implementation of an Associative Memory Using Sparse Clustered Networks

H. Jarollahi, N. Onizawa, V. Gripon and W. J. Gross, "Architecture and Implementation of an Associative Memory Using Sparse Clustered Networks," in Proceedings of IEEE International Symposium on Circuits and Systems, pp. 2901--2904, May 2012.

Associative memories are an alternative to indexed memories whose hardware implementation benefits certain applications, for instance data mining. Classical neural-network-based methods are ill-suited here, since the number of bits stored per bit used (efficiency) tends to zero as the size of the memory tends to infinity. Moreover, the size of the learned messages equals the number of nodes in the network, considerably limiting the number of messages (diversity) such a memory can learn. Recently, a new algorithm based on sparse neural networks achieving near-optimal efficiency and large diversity was introduced. In this paper, a proof-of-concept hardware implementation of these networks is presented; its limitations and perspectives are also analyzed.

Download the manuscript.
Download the presentation slides.

Bibtex
@inproceedings{JarOniGriGro20125,
  author = {Hooman Jarollahi and Naoya Onizawa and Vincent Gripon and Warren J. Gross},
  title = {Architecture and Implementation of an Associative Memory Using Sparse Clustered Networks},
  booktitle = {Proceedings of IEEE International Symposium on Circuits and Systems},
  year = {2012},
  pages = {2901--2904},
  month = {May},
}

Nearly-optimal associative memories based on distributed constant weight codes

V. Gripon et C. Berrou, "Nearly-optimal associative memories based on distributed constant weight codes," dans Proceedings of Information Theory and Applications Workshop, San Diego, CA, USA, pp. 269--273, février 2012.

A new family of neural networks achieving nearly optimal performance has recently been introduced. In these networks, messages are stored as cliques in multipartite graphs. In this paper, we interpret these networks using the formalism of error-correcting codes. To this end, we introduce two original codes, the thrifty code and the clique code, which are both sub-families of constant weight codes. We also equip the network with an improved decoding rule that gives the network answer-correctness properties and noticeably improves performance.

Thanks to Pascal Vontobel for pointing out several errors to me (this version is up to date).

Download the manuscript.
Download the presentation slides.

Bibtex
@inproceedings{GriBer20122,
  author = {Vincent Gripon and Claude Berrou},
  title = {Nearly-optimal associative memories based on distributed constant weight codes},
  booktitle = {Proceedings of Information Theory and Applications Workshop},
  year = {2012},
  address = {San Diego, CA, USA},
  pages = {269--273},
  month = {February},
}

A simple and efficient way to store many messages using neural cliques

V. Gripon et C. Berrou, "A simple and efficient way to store many messages using neural cliques," dans Proceedings of IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain, Paris, France, pp. 54--58, avril 2011.

Associative memories are devices capable of learning messages and then retrieving them despite erasures or errors. Their mechanics are similar to those of error-correcting decoders. On the other hand, correlation plays a dual role in these two devices: it is the cornerstone of retrieval in associative memories, while it is avoided in decoders. This paper introduces original codes that bring the two dynamics closer together. The basic idea is to associate each message to be learned with a clique in a binary neural network. The resulting performance far exceeds the state of the art, in particular that of Hopfield networks. Moreover, the proposed model is biologically much more plausible: it uses sparse binary connections between populations of neurons, which perform only two operations, summation and maximum selection.

Download the manuscript.
Download the presentation slides.

Bibtex
@inproceedings{GriBer20114,
  author = {Vincent Gripon and Claude Berrou},
  title = {A simple and efficient way to store many messages using neural cliques},
  booktitle = {Proceedings of IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain},
  year = {2011},
  pages = {54--58},
  address = {Paris, France},
  month = {April},
}

Coded Hopfield networks

C. Berrou and V. Gripon, "Coded Hopfield networks," in Proceedings of the 6th International Symposium on Turbo Codes and Iterative Information Processing, Brest, France, pp. 1--5, September 2010.

Error-correcting coding is introduced into Hopfield-type associative memories in order to increase learning diversity as well as robustness to errors and erasures. To this end, the graph associated with a classical Hopfield network is transformed into a bipartite graph in which the incoming information is projected onto orthogonal or quasi-orthogonal codes. While learning is similar to that of classical Hopfield networks (Hebb's rule), the retrieval process relies on error-correcting decoding, which offers strong discrimination properties between learned messages.

Download the manuscript.

Bibtex
@inproceedings{BerGri20109,
  author = {Claude Berrou and Vincent Gripon},
  title = {Coded Hopfield networks},
  booktitle = {Proceedings of the 6th International Symposium on Turbo Codes and Iterative Information Processing},
  year = {2010},
  pages = {1--5},
  address = {Brest, France},
  month = {September},
}

Qualitative Concurrent Stochastic Games with Imperfect Information

V. Gripon and O. Serre, "Qualitative Concurrent Stochastic Games with Imperfect Information," in Proceedings of 36th International Colloquium of Automata, Languages and Programming, Springer, Lecture Notes in Computer Science, Rhodes, Greece, pp. 200--211, July 2009.

We study a model of games that combines concurrency, imperfect information and stochastic aspects. These are finite-state games in which, at each round, the two players choose an action simultaneously and independently. A successor state is then chosen according to a probability distribution depending on the previous state and on the pair of actions chosen by the players. Imperfect information is modeled as follows: each player has an equivalence relation over states and can only observe the equivalence class of the current state. A player must therefore behave identically in any two plays that are indistinguishable to them. We consider reachability objectives (does the play visit a final state?) and Büchi objectives (does the play visit a final state infinitely often?). Our main contribution is to show that deciding whether the first player has a strategy ensuring an almost-sure win against any strategy of the second player is 2-ExpTime-complete. We also characterize the strategies that allow the first player to win almost surely.

Download the manuscript.

Bibtex
@inproceedings{GriSer20097,
  author = {Vincent Gripon and Olivier Serre},
  title = {Qualitative Concurrent Stochastic Games with Imperfect Information},
  booktitle = {Proceedings of 36th International Colloquium of Automata, Languages and Programming},
  year = {2009},
  publisher = {Springer},
  pages = {200--211},
  series = {Lecture Notes in Computer Science},
  address = {Rhodes, Greece},
  month = {July},
}




