Site de Vincent Gripon

Blog about my research and teaching

Towards an Intrinsic Definition of Robustness for a Classifier

T. Giraudon, V. Gripon, M. Löwe and F. Vermet, "Towards an Intrinsic Definition of Robustness for a Classifier," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4015--4019, 2021.

The robustness of classifiers has become a question of paramount importance in the past few years. Indeed, it has been shown that state-of-the-art deep learning architectures can easily be fooled with imperceptible changes to their inputs. Therefore, finding good measures of robustness of a trained classifier is a key issue in the field. In this paper, we point out that averaging the radius of robustness of samples in a validation set is a statistically weak measure. We propose instead to weight the importance of samples depending on their difficulty. We motivate the proposed score by a theoretical case study using logistic regression, where we show that the proposed score is independent of the choice of the samples it is evaluated upon. We also empirically demonstrate the ability of the proposed score to measure robustness of classifiers with little dependence on the choice of samples in more complex settings, including deep convolutional neural networks and real datasets.
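To make the idea of difficulty-weighted robustness concrete, here is a minimal Python sketch for the logistic regression setting mentioned above. It assumes a linear classifier, for which the robustness radius of a sample is its distance to the decision boundary, and it uses the inverse radius as an illustrative difficulty weight; the exact weighting proposed in the paper may differ, so this is only a sketch of the general idea, not the authors' score.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple logistic regression classifier on synthetic data
# (the paper's theoretical case study also considers logistic regression).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

w, b = clf.coef_.ravel(), clf.intercept_[0]

# For a linear classifier, the robustness radius of a sample is its
# L2 distance to the decision boundary: |w.x + b| / ||w||.
radii = np.abs(X @ w + b) / np.linalg.norm(w)

# Naive measure criticized in the paper: a plain average of per-sample radii,
# which depends heavily on which samples happen to be in the validation set.
naive_score = radii.mean()

# Hypothetical difficulty-weighted score: difficulty is taken here as the
# inverse radius, so hard (near-boundary) samples count more. This weighting
# is an assumption for illustration only.
difficulty = 1.0 / (radii + 1e-12)
weighted_score = np.sum(difficulty * radii) / np.sum(difficulty)

print(f"naive average radius:      {naive_score:.4f}")
print(f"difficulty-weighted score: {weighted_score:.4f}")
```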

Download the manuscript.

Bibtex
@inproceedings{GirGriLöVer2021,
  author    = {Théo Giraudon and Vincent Gripon and Matthias Löwe and Franck Vermet},
  title     = {Towards an Intrinsic Definition of Robustness for a Classifier},
  booktitle = {IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year      = {2021},
  pages     = {4015--4019},
}



