The Rise of Dark AI: A New Cybercrime Weapon

Artificial intelligence (AI) is rapidly becoming an indispensable part of modern life. The technology's swift development, however, also brings unforeseen dangers, most notably the emergence of Dark AI, a new weapon for cybercriminals. Dark AI is not an entirely new technology; rather, the term describes how criminals abuse and repurpose existing AI models for illegal purposes.
According to Sergey Lozhkin, Head of the Global Research and Analysis Team (GReAT) at Kaspersky, we are entering a new era of cybersecurity where AI acts as a “shield” for defense, while Dark AI becomes a dangerous “weapon.” Dark AI is defined as the deployment of large language models (LLMs) outside of controlled and secure systems, serving unethical and illegal ends.
Unlike legitimate AI platforms with strict protection mechanisms, Dark AI operates without any supervision and is used for scams, manipulation, cyberattacks, and data harvesting. Its rise is linked to the appearance of Black Hat GPT models in mid-2023: customized AI variants designed to generate malware, craft highly convincing phishing emails, create deepfake voices and videos, and even help hackers run simulated attacks. Notable names in the cybercriminal community include WormGPT, DarkBard, FraudGPT, and Xanthorox.
Impact on Internet Users and Youth
The proliferation of Dark AI is creating significant challenges for internet users, especially young people, who spend much of their time on social media and other online platforms.
- Increased risk of scams: Dark AI tools can create highly convincing fraudulent content, from fake bank emails and forged messages from relatives to scam calls using deepfake voices. Users can easily be deceived and lose money or personal information.
- Information and psychological manipulation: Dark AI can rapidly produce massive amounts of fake content and disinformation, sowing public confusion and mistrust. Young people, who consume information from many sources but often lack experience vetting them, are particularly vulnerable to psychological manipulation and misdirection.
- Threats to privacy: Dark AI tools can be used to illegally collect and exploit users’ personal data. This information can be sold to third parties or used for other illicit purposes, posing a serious threat to individual privacy.
Expert Recommendations
To combat the dangers of Dark AI, cybersecurity experts advise both organizations and individuals to strengthen their cybersecurity knowledge and practices.
For Individuals and Youth:
- Increase vigilance: Always be wary of suspicious emails, messages, or calls. Do not click unfamiliar links, and do not provide personal information without proper verification.
- Verify information carefully: Before sharing or trusting any information, especially news on social media, verify its authenticity from multiple reputable sources.
- Secure personal accounts: Use strong, unique passwords for each account and enable two-factor authentication (2FA) for an extra layer of security; the sketch after this list shows how those rotating 2FA codes are generated.
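For readers curious what happens behind the six-digit codes, the following is a minimal Python sketch of the standard time-based one-time password (TOTP) algorithm (RFC 6238) that most authenticator apps use. It relies only on the standard library, and the Base32 secret shown is a placeholder for illustration, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Placeholder secret; real secrets come from the service's 2FA setup key.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because each code is derived from the current time and a secret shared only between the user and the service, a password stolen through a phishing email is not enough on its own to take over the account.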
For Organizations:
- Tighten access control: Train employees on cybersecurity risks and limit Shadow AI—the unauthorized use of AI tools by internal staff, which can lead to data leaks.
- Invest in security solutions: Deploy AI-powered threat detection and establish a Security Operations Center (SOC) to proactively identify and counter attacks; a toy example of such automated anomaly flagging follows this list.
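To make that idea concrete, below is a toy sketch of the kind of anomaly flagging an AI-assisted SOC tool might perform, using scikit-learn's IsolationForest. The login features (hour of day, failed attempts, data volume) and their distributions are invented purely for illustration; a real deployment would train on far richer telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-login features: [hour_of_day, failed_attempts, MB_downloaded]
rng = np.random.default_rng(42)
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),   # activity clusters around mid-morning
    rng.poisson(0.2, 500),    # the occasional failed attempt
    rng.normal(50, 10, 500),  # routine data volumes
])
suspicious = np.array([[3, 8, 900]])  # 3 a.m., many failures, bulk download

# Fit on routine activity, then score the new event: -1 means anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)
print(model.predict(suspicious))
```

Rather than hand-writing a rule for every attack pattern, the model learns the shape of routine activity and flags events that fall outside it, which is what lets such tools surface novel, AI-driven attack behavior.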
The rise of Dark AI highlights a stark reality: AI itself does not distinguish between right and wrong; it simply follows instructions. Therefore, every individual and organization must proactively update their knowledge and strengthen their preventative measures to avoid becoming a victim of cyberattacks in this new era.