Document Type

Conference Paper

Disciplines

1.2 COMPUTER AND INFORMATION SCIENCE, Computer Sciences

Publication Details

https://dl.acm.org/doi/10.1145/3590777.3590792

Reilly, C., O'Shaughnessy, S. & Thorpe, C. (2023). Robustness of Image-Based Malware Classification Models trained with Generative Adversarial Networks. EICC '23: Proceedings of the 2023 European Interdisciplinary Cybersecurity Conference, June 2023, Pages 92–99.

Abstract

As malware continues to evolve, deep learning models are increasingly used for malware detection and classification, including image-based classification. However, adversarial attacks can be used to perturb images so as to evade detection by these models. This study investigates the effectiveness of training deep learning models with Generative Adversarial Network-generated data to improve their robustness against such attacks. Two image conversion methods, byte plots and space-filling curves, were used to represent the malware samples, and a ResNet-50 architecture was used to train models on the image datasets. The models were then tested against a projected gradient descent attack. It was found that without GAN-generated data, the models' prediction accuracy dropped drastically from 93-95% to 4.5%. However, adding adversarial images to the training data almost doubled the models' accuracy. This study highlights the potential benefits of incorporating GAN-generated data in the training of deep learning models to improve their robustness against adversarial attacks.
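The abstract describes the attack setting at a high level; as a concrete illustration, the sketch below implements a generic L-infinity projected gradient descent (PGD) attack against a ResNet-50 classifier in PyTorch. The model configuration, class count, epsilon, step size, and iteration count are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of a PGD evaluation of an image-based malware classifier.
# All hyperparameters below (eps, alpha, steps, 25 classes) are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Generate adversarial examples with an L-infinity PGD attack."""
    adv = images.clone().detach()
    # Start from a random point inside the epsilon ball.
    adv = adv + torch.empty_like(adv).uniform_(-eps, eps)
    adv = torch.clamp(adv, 0, 1).detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Signed gradient step, then project back into the epsilon ball.
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, images - eps), images + eps)
        adv = torch.clamp(adv, 0, 1)
    return adv.detach()

# Hypothetical usage: attack a malware-image classifier with 25 families.
model = resnet50(num_classes=25).eval()
images = torch.rand(4, 3, 224, 224)      # stand-in for byte-plot images
labels = torch.randint(0, 25, (4,))
adv_images = pgd_attack(model, images, labels)
```

In the defence described by the abstract, adversarial (or GAN-generated) images produced in this way would be mixed back into the training set to improve robustness.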

DOI

https://doi.org/10.1145/3590777.3590792

Funder

This research received no external funding

Creative Commons License

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
