Generative Artificial Intelligence In Social Engineering: Empowering Deepfake Attacks
Generative artificial intelligence has enhanced social engineering attacks through deepfake techniques, which create realistic audio, video, and text forgeries. This paper analyzes the state of the art on the use of AI in social engineering attacks, identifying methods, tools, application scenarios, emerging trends, and research gaps. The methodology follows the PRISMA model, drawing on databases such as ACM Digital Library, IEEE Xplore, ScienceDirect, Scopus, and Web of Science, with searches covering publications from 2018 to 2025. The results show an exponential increase in studies from 2023 onwards, with emphasis on tools such as DeepFaceLab, Face Swap, Deep Voice, WormGPT, and FraudGPT; key application scenarios include financial fraud, political disinformation, and manipulation of personal content. Although defensive technologies continue to evolve, significant challenges remain, requiring advances in multimodal detection, robust regulation, and public awareness.
