Artificial intelligence (AI) is rapidly transforming various aspects of our lives, driving increased efficiency and automation. However, this technological advancement also presents significant challenges to cybersecurity. Cybercriminals, unconstrained by ethical considerations, are increasingly leveraging AI for malicious purposes, with social engineering attacks being a prime target. The growing accessibility of AI tools further exacerbates this issue, making it easier for even less sophisticated actors to deploy these tactics.
This underscores the need for security experts worldwide to develop innovative and proactive defense strategies against AI-based attacks.
For social engineering attacks in particular, AI has the potential to act as a catalyst, and this type of attack is expected to increase massively in the future with its help.
Social engineering is an insidious method in which attackers exploit human psychology and behavior to gain access to sensitive information or systems, or to trigger an action. Instead of technical hacking methods, attackers rely on manipulation, deception, and the exploitation of trust, helpfulness, or belief in authority. The common pattern involves four phases: (1) collect information about the target; (2) develop a relationship with the target; (3) exploit the available information and execute the attack; and (4) exit without leaving traces (cf. Fatima Salahdine and Naima Kaabouch, Social Engineering Attacks: A Survey). The goal is to trick victims into revealing passwords, installing malware, transferring money, or granting unauthorized access.
Technical measures offer only limited protection against such attacks, which makes awareness crucial; companies should therefore train their employees accordingly.
This article primarily focuses on vishing (voice phishing[1]) attacks as a specific form of social engineering. AI agents are likely to make this type of attack significantly more dangerous in the future.
For those who are not familiar with the term AI agent: AI agents are autonomous or sometimes semi-autonomous systems designed to perceive their environment and act to achieve set goals, thereby shaping their future interactions with that environment. These agents can use the power of LLMs to plan tasks, trigger task execution, make decisions, and interact meaningfully with the world. Unlike basic LLM applications, an AI agent using LLMs follows a cyclic approach to achieve its end goal, continuously learning from its findings and adjusting its approach. This iterative self-adaptation makes the agent effective at solving complex problems in a multistep process until the task is completed (cf. CSA, Using AI for Offensive Security).
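To make the cyclic approach concrete, the following minimal Python sketch shows the plan-act-observe loop in the abstract. All helpers (llm_plan, execute) are hypothetical placeholders standing in for a real model call and real tool use; this is not the API of any particular agent framework.

```python
from typing import List, Tuple

def llm_plan(goal: str, history: List[Tuple[str, str]]) -> str:
    """Stand-in for an LLM call that chooses the next action.
    A real agent would send the goal plus the full history to a model."""
    return "DONE" if len(history) >= 3 else f"step-{len(history) + 1}"

def execute(action: str) -> str:
    """Stand-in for acting on the environment (tool use, API call, ...)."""
    return f"result of {action}"

def run_agent(goal: str, max_steps: int = 10) -> List[Tuple[str, str]]:
    """The cyclic loop: plan -> act -> observe -> re-plan until done."""
    history: List[Tuple[str, str]] = []        # the agent's working memory
    for _ in range(max_steps):
        action = llm_plan(goal, history)       # model decides the next step
        if action == "DONE":                   # model judges the goal met
            break
        history.append((action, execute(action)))  # adapt from the outcome
    return history

print(run_agent("demonstrate the loop"))
```

The stub simply iterates three times, but the structure is the point: the model re-plans after every action based on what the previous step returned, which is exactly the iterative self-adaptation described above.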
AI agents might be used in various vishing scenarios, both in corporate contexts and for attacks in the private sphere. In the private sphere, they might impersonate family members in distress to trick victims into transferring money to the fraudsters; here, voice cloning in particular poses a threat.
In the corporate context, attacks are deployed in different ways. Popular methods include posing as IT support staff (e.g., from Microsoft) or as authorities (e.g., the federal police) to obtain protected information and passwords for further attacks; such targeted attacks are often also referred to as spear phishing. Another method is to impersonate suppliers or C-level executives towards accounting staff to trigger fraudulent payments.
There is now a danger that the use of AI agents will further increase vishing attacks and make them even more difficult to detect. AI significantly amplifies the effectiveness of vishing attacks in several ways, as the following examples show.
We are not yet accustomed to such AI scam calls today, which further increases the risk, as a noteworthy study by João Figueiredo et al. shows (João Figueiredo et al., On the Feasibility of Fully AI-automated Vishing Attacks). The study investigates the potential escalation of vishing attacks through AI automation. The authors introduce "ViKing," an AI-powered vishing system that uses large language models (LLMs) for conversation, along with speech-to-text and text-to-speech tools, to carry out phone-based social engineering. In experiments with 240 participants, ViKing extracted sensitive information in 52% of cases. Even when participants were strongly cautioned and explicitly educated not to disclose information, 33% still revealed sensitive information to ViKing's bots. Key findings highlight the system's realism and human-like dialogue, though voice delays and response timing affected believability. The study underscores the risks posed by accessible AI tools in executing mass-scale vishing campaigns and emphasizes the need for enhanced cyber-awareness and defensive measures.
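The pipeline the study describes is worth pausing on, because it shows how low the technical barrier has become: chaining speech-to-text, an LLM, and text-to-speech is the same loop that powers any voice assistant. The sketch below illustrates only that generic pattern; every function is a hypothetical stub, not ViKing's implementation, and nothing here is specific to an attack.

```python
def speech_to_text(audio: bytes) -> str:
    """Placeholder STT stage (a real system would call a speech model)."""
    return "transcribed caller audio"

def llm_reply(transcript: str, persona: str) -> str:
    """Placeholder LLM stage that generates the next conversational turn."""
    return f"[{persona}] reply to: {transcript}"

def text_to_speech(text: str) -> bytes:
    """Placeholder TTS stage that synthesizes audio for the phone line."""
    return text.encode("utf-8")

def conversation_turn(incoming_audio: bytes, persona: str) -> bytes:
    """One conversational turn of the generic STT -> LLM -> TTS loop."""
    transcript = speech_to_text(incoming_audio)  # understand the caller
    reply = llm_reply(transcript, persona)       # decide what to say next
    return text_to_speech(reply)                 # speak it back
```

The serial structure also explains the study's observation about believability: every turn must pass through all three stages before the bot can speak, which is where the response delays come from.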
A comparable study by Fabricio Toapanta et al. came to a similar result in a similar experiment: the success rate was likewise over 50% and in some cases even reached 70% (cf. Fabricio Toapanta et al., AI-Driven Vishing Attacks: A Practical Approach).
Beyond automated calls, AI also enables sophisticated voice cloning. With relatively short audio samples, attackers can create convincing replicas of a person's voice (a well-known example imitates Morgan Freeman). This poses a significant threat, especially in targeted attacks such as whaling (targeting high-profile individuals) and CEO fraud.
A payment request in an email from a fictitious business partner may not be recognized immediately, but the risk of such fraudulent messages is now widely known, and payments are usually only made after additional confirmation. Yet imagine the CEO himself calling to announce that the CFO will also get in touch during the day to give the second approval: many employees will no longer dare to question a false invoice. Voice cloning and AI pose a great risk in this scenario, because they make such an attack far more believable; employees rarely challenge a caller whose voice sounds familiar. One example where it did not work was an attempt to impersonate the CEO of Ferrari: the attacker was exposed by a personal question.

Countering the AI-Powered Threat

New security procedures will be needed against such attacks. Especially in large companies, there is not always a personal connection that allows a verifying personal question on the spot. Addressing the evolving threat of AI-driven social engineering therefore requires a multi-layered approach that combines technical controls, verification procedures, and employee awareness.
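One procedural layer can be made concrete as a simple rule: a voice request alone never authorizes a payment; execution requires an independent callback over a channel the company already trusts, plus a second approver above a value threshold. The sketch below is purely illustrative; the PaymentRequest fields and the threshold are assumptions for the example, not an established standard.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount: float
    requested_via: str       # channel the request arrived on, e.g. "phone"
    callback_verified: bool  # confirmed via a known number we dialed ourselves
    second_approver: bool    # independent four-eyes approval obtained

# Assumed policy threshold for this sketch; each company sets its own.
HIGH_VALUE_THRESHOLD = 10_000.0

def may_execute(req: PaymentRequest) -> bool:
    """A voice request alone never authorizes a payment: require an
    out-of-band callback, and a second approver for high-value transfers."""
    if req.requested_via == "phone" and not req.callback_verified:
        return False
    if req.amount >= HIGH_VALUE_THRESHOLD and not req.second_approver:
        return False
    return True

# The 'CEO call' scenario from above stays blocked until verified.
print(may_execute(PaymentRequest(50_000.0, "phone", False, True)))  # False
```

In the CEO-fraud scenario described above, such a rule blocks the transfer until someone calls the supposed CEO back on a number the company already has on file, neutralizing the cloned voice.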
Conclusion: A Call for Attentiveness

AI has significantly raised the stakes in the social engineering game. The combination of automated attacks, enhanced conversational abilities, and voice cloning creates a potent threat landscape. While technical solutions play a role, human awareness and robust security practices remain crucial lines of defense. Continuous education, vigilance, and proactive adaptation to these evolving threats are essential to mitigate the risks posed by AI-powered social engineering.
Authored by Yves Gogniat
[1] Vishing (voice phishing) is a form of social engineering in which criminals try to persuade their victims over the phone to reveal sensitive information or perform certain actions. It is essentially the telephone version of the well-known email phishing.