The dark side of AI democratization: You no longer need to be a hacker to hack
Generative AI promises a future where you no longer need to be a skilled writer to draft a story or a trained software engineer to code. But there’s a dark side to this democratization: AI is enabling people with little technological know-how to become cybercriminals.
I’m a cybersecurity researcher who monitors the darknet — the shadowy area of the Internet where people can buy illegal goods such as guns, drugs and child pornography. Recently, I’ve noticed a worrying trend: People are selling increasingly powerful, AI-driven hacking tools with the potential to cause enormous damage.
Novices with little hacking experience can now use AI-generated phishing content, malware and more to target everything from individual bank accounts to power plants. Easier access to hacking tools is especially dangerous, as more physical devices and systems, from cars to toothbrushes to the electric grid, are connected to the Internet and open themselves up to attacks. The “Flipper Zero,” a small device anyone can use to hack traffic lights, is an early example of the threat that amateur hackers can pose to physical systems.
The democratization of AI, including through open-source platforms, has major benefits. When anyone can experiment with the technology, it enables entrepreneurship and innovation and prevents monopolization by big tech companies. At the same time, open AI models can be bootstrapped for nefarious purposes.
Rather than cage AI, we can fight back by deploying advanced AI cybersecurity tools and updating our defensive strategies to better monitor hacking communities on the darknet.
Companies like Google, OpenAI and Microsoft put guardrails on their products to ensure AI isn’t used to hack, produce explicit content, guide the creation of weapons or engage in other illegal behavior. Yet the proliferation of hacking resources, sexual deepfakes and other illicit content made using AI suggests that bad actors are still finding ways to cause harm.
One path hackers that use makes indirect queries to large language models such as ChatGPT that bypass safeguards. A hacker may disguise a request for content in a way that the AI fails to recognize as malicious, leading the system to produce phishing materials or violent content. Alternatively, a strategy known as “prompt injection” can trick the large language model into leaking information from other chatbot users.
Hackers can also build alternative chatbots using open-source AI models — think ChatGPT but without guardrails. FraudGPT and WormGPT craft convincing phishing emails and give advice about hacking techniques. Some people are using jerry-rigged large language models to generate deepfakes of child pornography. This is just the start. One hacking forum I recently reviewed points to a rapidly growing class of large language models explicitly designed to cause harm.
Creating programs like FraudGPT and WormGPT requires technical knowledge. Using them doesn’t. Amateur hackers, called “script kiddies,” can execute hacking scripts without any technical skills. I recently used WhiteRabbitNeo, a cybersecurity tool that can also act as a WormGPT and FraudGPT alternative, to help me crash a Windows 11 computer. Not only did it do a pretty good job at generating a script, it also gave me instructions on how to deploy it.
If we want people to be able to adapt and experiment with generative AI, they will inevitably use it for both good and bad. We should continue to explore regulations to punish the misuse of AI. But placing limits on open-source AI models will limit creative and beneficial uses, while hackers who don’t care about intellectual property rights and guardrails will continue to find workarounds.
Our best weapon is to fight fire with fire by using AI as a defensive cybersecurity tool. Cybersecurity has long been a whack-a-mole game: A new threat emerges, humans update software to address the threat, another threat emerges, and so on. We’ve often relied on white-hat hackers probing vulnerabilities one by one rather than systematic approaches to discovering flaws.
AI can help us continuously learn and respond to threats with greater agility. One of its greatest strengths is pattern recognition, which can be used to automate monitoring of networks and more easily identify potentially harmful activity. AI can compile emerging threats in a database and generate summaries of attempted attacks.
We’re already seeing a wave of AI-powered cybersecurity. CloudFlare is using AI to track other AI and block bots from scraping content. Mandiant is using AI to investigate cybersecurity incidents. IBM deploys AI to accelerate threat detection and mitigation.
As companies build out AI security tools, we should continue monitoring “dark web” and hacker communities to keep tabs on the latest malware being offered and create proactive fixes. One benefit of resources aimed at “script kiddies” is that they are offered in places visible to amateurs, which are typically visible to researchers too. Many of these communities operate in languages other than English. To ensure AI cybersecurity can adapt to global threats, we must invest more in multilingual large language models. Currently, disproportionate resources go to developing English language models.
The CloudStrike outage earlier this year reminded us of the fragility of global cyber infrastructure. One bad update from a company with good intentions was enough to cause billions of dollars in damage, bring air travel to a halt, and crash 911 emergency services. Now imagine AI hacking tools in the hands of anyone around the world who wants to cause harm — and in an age of Internet connected products and systems where more of the things we own now contain a chip.
We shouldn’t wall off access to generative AI and all the incredible things it can do. But we must use AI strategically to stay one step ahead of the threats it will inevitably bring.
Victor Benjamin is an assistant professor of information systems at the W.P. Carey School of Business at Arizona State University.
Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed..