Written By
Abdulaziz Almaslukh
Senior Researcher, Saudi Information Technology Company (SITE)
- AI-powered social engineering cyberattacks are automated, adaptive and tailored to their targets.
- AI has been used to carry out social engineering cyberattacks that defraud people and companies of millions of dollars.
- Countering this requires a coordinated approach with cooperation across sectors at its heart.
Generative AI (GenAI), and large language models (LLMs) in particular, have taken the world by storm. The technology has shown tremendous potential to automate day-to-day tasks, ranging from basic IT helpdesk requests to sophisticated user behavior analysis. This automation is typically carried out by AI agents: autonomous software designed to perform tasks and execute actions. Notably, businesses across industries are increasingly adopting AI tools to improve efficiency and reduce costs.
However, the rise of AI models has also led to the emergence of new cyberattacks that exploit them, known as AI-based attacks. These attacks are automated, adaptive and tailored to their targets, and their rise opens a new arena that is changing the cybersecurity landscape. In fact, MITRE has introduced the MITRE ATLAS framework as an extension of the widely used MITRE ATT&CK framework to address adversarial tactics against AI systems.
AI and cybersecurity
Despite the exciting and rapid advancement of AI technologies, their misuse has raised significant concerns. In fact, the World Economic Forum’s Global Risks Report 2024 ranks misinformation and disinformation, with AI a key driver, as the most severe risk over the next two years. Built on advanced models, AI-based attacks have transformed the cybersecurity threat landscape and can cause devastating damage. Their ability to reason and act autonomously renders traditional defense techniques ineffective, paving the way for sophisticated threats that emerge at a speed and scale far beyond human capabilities.
The wave of AI-based attacks has been grabbing headlines in recent years. Deepfake technology uses AI to generate deceptive audio, video and images. In Hong Kong, it was used to scam a finance worker into paying $25 million to fraudsters who impersonated the company’s chief financial officer during a video conference call. Deepfake-driven scams like this, which cost threat actors next to nothing to mount, are expected to accelerate, threatening businesses at large.
AI-powered social engineering attacks
With an ever-growing online footprint of personal data and the increasing sophistication of AI-based attacks, threat actors can now craft attacks that are more personalized and deceptive. One such threat is the social engineering attack: the art of manipulating individuals into revealing confidential information or performing actions that compromise their security. The availability of powerful AI models, particularly LLMs, puts social engineering attacks within reach of historically less capable threat actors. In one case, scammers used AI voice imitation to convince a mother that her 15-year-old daughter had been kidnapped, when the girl was in fact safe.
Digital deception has advanced significantly, opening a new frontier of social engineering attacks. The implications for digital assets are serious, ranging from financial loss to privacy breaches. These attacks can be delivered through many channels, including email, phishing websites, text messages, voice or video calls and social media platforms. In general, social engineering attacks exploit human vulnerabilities rather than weaknesses in digital infrastructure security.
While solid defense techniques against AI-based attacks are still in the making, the number of cybersecurity incidents involving AI has risen significantly. These attacks are growing more powerful and are often highly successful, making the mission of securing environments more difficult. But that does not mean the scammers and hackers have won.
Addressing concerns related to AI-based attacks
Concerns about AI-based attacks can be addressed in three ways. First, it is essential to understand how effective current cybersecurity controls are against emerging AI-based cyberattacks. This understanding will help establish global countermeasures through immediate dialogue and information sharing between organizations, before AI-based attacks evolve further.
Second, defending our critical assets depends on improving current security measures, developing solid defenses against these emerging attacks, and educating communities and raising their awareness of these new techniques. Revisiting existing frameworks and updating them in response to AI-based attacks is another significant step towards safeguarding valuable assets.
Finally, the entire ecosystem, including governments and leading technology players, should work together to support research centers, startups and small and medium-sized enterprises that focus on the intersection of AI and cybersecurity. Such investment could uncover groundbreaking solutions, much as OpenAI transformed the AI field before more established players did. There is an urgent need for more cooperative initiatives on the global stage to ensure that cybersecurity defenses evolve faster than the threats posed by these new AI-based attacks.