Variational Autoencoder for Image-Based Augmentation of Eye-Tracking Data

Abstract: Over the past decade, deep learning has achieved unprecedented successes in a diversity of application domains, given the availability of large-scale datasets. However, particular domains, such as healthcare, inherently suffer from data paucity and imbalance. Moreover, datasets can be largely inaccessible due to privacy concerns or a lack of data-sharing incentives. Such challenges have lent significance to generative modeling and data augmentation in those domains. In this context, this study explores a machine learning-based approach for generating synthetic eye-tracking data, investigating a novel application of variational autoencoders (VAEs). More specifically, a VAE model is trained to generate an image-based representation of the eye-tracking output, the so-called scanpath. Overall, our results validate that the VAE model can generate plausible output from a limited dataset. Finally, we empirically demonstrate that such an approach can be employed as a data augmentation mechanism to improve performance in classification tasks.
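
To illustrate the technique described in the abstract, below is a minimal sketch of a convolutional VAE for scanpath images, written in PyTorch. The architecture, the 64x64 grayscale input size, and the 32-dimensional latent space are illustrative assumptions, not the configuration reported in the paper; the final lines show how synthetic scanpath images could be decoded from random latent samples for augmentation.

# Minimal convolutional VAE sketch for scanpath images (illustrative only;
# the paper's actual architecture and hyperparameters are not given here).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScanpathVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        # Encoder: 64x64x1 -> 32x32x32 -> 16x16x64 -> flattened features
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        # Decoder: latent vector -> 16x16x64 -> 32x32x32 -> 64x64x1
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps while keeping the operation differentiable
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(self.fc_dec(z)), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# Augmentation: decode random latent samples into synthetic scanpath images
model = ScanpathVAE()
with torch.no_grad():
    z = torch.randn(16, 32)                     # 16 new latent samples
    synthetic = model.decoder(model.fc_dec(z))  # (16, 1, 64, 64) images

In a typical augmentation workflow, such synthetic images would be added to the minority class of the training set before fitting a classifier, which is the kind of use the abstract reports improving classification performance.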
Document type:
Journal article

https://hal-u-picardie.archives-ouvertes.fr/hal-03601452
Contributor: Louise DESSAIVRE
Submitted on: Tuesday, March 8, 2022 - 11:40:29
Last modified on: Tuesday, September 6, 2022 - 11:12:23

Citation

Mahmoud Elbattah, Colm Loughnane, Jean-Luc Guerin, Romuald Carette, Federica Cilia, et al. Variational Autoencoder for Image-Based Augmentation of Eye-Tracking Data. Journal of Imaging, MDPI, 2021, 7 (5), ⟨10.3390/jimaging7050083⟩. ⟨hal-03601452⟩
