

A Generative AI and Cost-Sensitive Learning Framework for Robust Intrusion Detection Against Adversarial Attacks

Adversarial attacks pose significant challenges to the reliability of Deep Learning (DL)-based Intrusion Detection Systems (IDSs) by exploiting model vulnerabilities. This study evaluates the performance of a DL-based IDS under three conditions: data cleaning, adversarial attacks using the Fast Gradient Sign Method (FGSM), and adversarial training augmented with synthetic samples generated via Conditional Tabular Generative Adversarial Networks (CTGAN). To improve model performance, a cost-sensitive framework is integrated, prioritizing the accurate detection of minority classes. Results demonstrate a significant decline in accuracy and an increase in False Positive and False Negative Rates under adversarial conditions. The application of CTGAN-based data augmentation combined with cost-sensitive adversarial training effectively mitigates these impacts, improving the system’s resilience against FGSM attacks. These findings emphasize the importance of synthetic data augmentation and cost-sensitive approaches in bolstering IDS defenses against evasion attacks in dynamic and imbalanced network environments.
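The core of the FGSM attack mentioned above is a single gradient-sign perturbation of the input features. The abstract does not give the authors' implementation, so the following is only a minimal illustrative sketch using a hand-derived gradient for a linear classifier (the model, weights, and label encoding are all hypothetical, not taken from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, eps):
    """FGSM on a toy linear classifier: x_adv = x + eps * sign(grad_x loss)."""
    # Gradient of the logistic loss -log(sigmoid(y * w.x)) with respect to x
    grad = -y * sigmoid(-y * np.dot(w, x)) * w
    # Perturb every feature by eps in the direction that increases the loss
    return x + eps * np.sign(grad)

def loss(x, y, w):
    # Logistic loss of the toy classifier on sample (x, y)
    return -np.log(sigmoid(y * np.dot(w, x)))

# Hypothetical "detector" weights and one traffic sample (not from the paper)
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.3])
y = 1.0  # assumed label encoding: +1 = attack traffic, -1 = benign

x_adv = fgsm_attack(x, y, w, eps=0.1)
print(loss(x_adv, y, w) > loss(x, y, w))  # → True: the perturbed sample is harder
```

Adversarial training, in this setting, would simply mix such `x_adv` samples back into the training set; the cost-sensitive component would additionally weight the loss of minority attack classes more heavily.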

Hind Haqqoun
University Mohammed V
Morocco

Ali Idri
University Mohammed V
Morocco

Houssam Zouhri
University Mohammed V
Morocco