Large Language Models and Their Abuse in High-Level Social Engineering Campaigns
DOI: https://doi.org/10.61446/ds.4.2025.10444

Keywords: Large Language Models, Social Engineering, Adversarial AI, Phishing, Deception, Human Vulnerabilities, CI/CD Security, Supply Chain

Abstract
The rapid evolution and widespread accessibility of Large Language Models (LLMs) have transformed the cyber threat landscape. While LLMs deliver major benefits in productivity, code acceleration, knowledge augmentation, and domain translation, they simultaneously enable a new generation of high-level, linguistically precise cyber deception operations. This paper examines the shift in social engineering strategy induced by generative models, analyzing how adversaries now leverage AI to produce contextually aligned, psychologically adaptive, multilingual attacks at scale, bypassing traditional anti-phishing controls. The paper also conceptually integrates LLM-based social engineering with emerging research on adversarial AI misuse inside CI/CD supply chains, demonstrating that human trust manipulation and machine trust manipulation are converging into a single strategic threat dimension. The result is a unified adversarial model in which linguistic credibility becomes a scalable commodity weapon across human and automated domains. This research proposes a taxonomy of LLM-augmented social engineering attack classes, maps cognitive persuasion levers to MITRE ATT&CK technique paths, and defines a dual-plane evaluation methodology that measures both behavioral technique disruption and cognitive persuasion disruption. Findings suggest that defensive strategy must shift toward AI-augmented detection, adversarial linguistics analysis, supply-chain integrity reinforcement, and continuous cognitive resilience engineering.