A Generative AI and Cost-Sensitive Learning Framework for Robust Intrusion Detection Against Adversarial Attacks
Adversarial attacks pose significant challenges to the reliability of Deep Learning (DL)-based Intrusion Detection Systems (IDSs) by exploiting model vulnerabilities. This study evaluates the performance of a DL-based IDS under three conditions: data cleaning, adversarial attacks using the Fast Gradient Sign Method (FGSM), and adversarial training augmented with synthetic samples generated via Conditional Tabular Generative Adversarial Networks (CTGAN). To improve model performance, a cost-sensitive framework is integrated, prioritizing the accurate detection of minority classes. Results demonstrate a significant decline in accuracy and an increase in False Positive and False Negative Rates under adversarial conditions. CTGAN-based data augmentation combined with cost-sensitive adversarial training effectively mitigates these impacts, improving the system's resilience against FGSM attacks. These findings emphasize the importance of synthetic data augmentation and cost-sensitive approaches in bolstering IDS defenses against evasion attacks in dynamic and imbalanced network environments.
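The two core ideas named in the abstract, FGSM evasion and cost-sensitive (class-weighted) training, can be sketched minimally. The snippet below is an illustrative toy on a logistic-regression "detector", not the paper's DL model or its CTGAN pipeline; all function names and parameters (`fgsm_perturb`, `weighted_grad_step`, `minority_weight`) are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.1):
    """Craft an FGSM evasion sample for a logistic-regression detector:
    x_adv = x + eps * sign(grad_x loss(x, y))."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w              # gradient of binary cross-entropy w.r.t. the input
    return x + eps * np.sign(grad_x)

def weighted_grad_step(X, y, w, b, lr=0.1, minority_weight=5.0):
    """One cost-sensitive gradient step: errors on the minority
    (attack, y=1) class are weighted more heavily than benign traffic."""
    p = sigmoid(X @ w + b)
    sample_w = np.where(y == 1, minority_weight, 1.0)
    err = sample_w * (p - y)          # per-sample weighted BCE gradient factor
    w_new = w - lr * (X.T @ err) / len(y)
    b_new = b - lr * err.mean()
    return w_new, b_new
```

Cost-sensitive adversarial training in this spirit would generate `fgsm_perturb` samples each epoch and feed them, together with synthetic minority-class samples, into class-weighted updates like `weighted_grad_step`.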
