The rise of artificial intelligence has brought incredible advancements across industries, but it has also given rise to new threats in cybersecurity. The FBI’s 2021 Internet Crime Report revealed that phishing remains the most common IT threat in America, and with the advent of OpenAI’s ChatGPT, hackers have gained an unprecedented tool to bolster their phishing campaigns. Because ChatGPT writes with near fluency in English, bad actors can use it to generate convincing phishing emails and even hacking code. The potential for ChatGPT itself to be hacked and used to spread dangerous misinformation and propaganda is also a cause for concern. In the Harvard Business Review article “The New Risks ChatGPT Poses to Cybersecurity”, the authors examine these new risks, explore the training and tools cybersecurity professionals need to respond, and urge government oversight to ensure AI usage doesn’t undermine cybersecurity efforts.
The introduction of OpenAI’s groundbreaking AI language model ChatGPT in November 2022 stunned the world with its capabilities, drawing in millions of users. However, as with any innovative technology, concerns soon emerged about its potential use by malicious actors. In particular, cybersecurity professionals have raised alarm bells about the new pathways ChatGPT creates for hackers to breach advanced cybersecurity software.
This is especially worrying for a sector that is already grappling with a 38% global increase in data breaches in 2022. With ChatGPT’s sophistication and potential reach, leaders must take a proactive approach to combat this new threat. The first step is to identify the key risks posed by ChatGPT’s widespread use, HBR writes.
Phishing scams and consumer-oriented AIs
One major risk is the rise of AI-generated phishing scams. While less advanced language-based AIs have been publicly available for years, ChatGPT is leagues ahead in sophistication. Its ability to converse fluently with users without spelling, grammar, or verb-tense mistakes makes its output difficult to distinguish from that of a real person. For hackers, this is a game-changer.
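To make the defensive side of this concrete, here is a minimal sketch of the kind of heuristic tooling a security team might use to triage incoming email. The keyword lists, domain suffixes, and scoring threshold below are illustrative assumptions for this example only, not a production detector; real systems combine trained models with infrastructure signals.

```python
import re

# Illustrative signals only -- chosen for this sketch, not an authoritative list
URGENCY_PHRASES = ["act now", "verify your account", "password expires", "urgent"]
SUSPICIOUS_TLDS = (".tk", ".xyz", ".top")


def phishing_score(subject: str, body: str, sender: str) -> int:
    """Return a simple heuristic score; higher means more suspicious."""
    text = f"{subject} {body}".lower()
    score = 0
    # Urgency language is a classic phishing tell
    score += sum(2 for phrase in URGENCY_PHRASES if phrase in text)
    # A link combined with login-related language raises suspicion
    if re.search(r"https?://\S+", body) and "login" in text:
        score += 2
    # Unusual sender domains add to the score
    if sender.lower().endswith(SUSPICIOUS_TLDS):
        score += 3
    return score


def is_suspicious(subject: str, body: str, sender: str, threshold: int = 4) -> bool:
    """Flag a message when its heuristic score crosses the threshold."""
    return phishing_score(subject, body, sender) >= threshold
```

Note that fluent AI-generated text defeats the old "bad grammar" heuristic entirely, which is why sketches like this lean on behavioral signals (urgency, links, sender domains) rather than language quality.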
To address these risks, cybersecurity professionals need adequate training and tools to detect and respond to attacks facilitated by ChatGPT. This requires concerted efforts by both industry and government to develop and implement the necessary measures.
Ultimately, HBR thinks government oversight is crucial to ensure that AI usage doesn’t undermine cybersecurity efforts. It is essential that policymakers take a proactive role in regulating AI, given its potential impact on society. By doing so, we can mitigate the risks posed by ChatGPT and other advanced AI technologies, and protect our digital infrastructure from malicious actors.
Malicious content from AI
ChatGPT’s proficiency in generating computer programming tools has given bad actors a potential new tool to aid their hacking campaigns. While the AI is programmed not to generate code that is malicious or intended for hacking purposes, manipulation of ChatGPT is certainly possible. In fact, hackers are already scheming to trick the AI into generating hacking code, which could lead to a new era of cybersecurity threats. Cybersecurity professionals need continuous upskilling and resources to respond to these threats and to equip themselves with AI technology to better spot and defend against AI-generated hacker code. While there is concern about the power ChatGPT provides to bad actors, it’s important to remember that this same power is equally available to good actors. As this technology evolves, we must examine these possibilities and create new training to keep up.
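One concrete (if simplified) example of the tooling defenders might use to spot risky generated code is static screening of a snippet before it is ever run. The deny-lists below are illustrative assumptions for this sketch, not a complete policy; real reviews would pair static analysis with sandboxed execution.

```python
import ast

# Illustrative deny-lists -- assumptions for this sketch, not an exhaustive policy
RISKY_CALLS = {"eval", "exec", "compile", "__import__"}
RISKY_MODULES = {"subprocess", "socket", "ctypes"}


def flag_risky_code(source: str) -> list[str]:
    """Return human-readable flags for risky constructs in a Python snippet."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Direct calls to dynamic-execution builtins
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"call to {node.func.id}() at line {node.lineno}")
        # Imports of modules that reach the OS or network
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            names = [alias.name for alias in node.names]
            module = getattr(node, "module", None)
            if module:
                names.append(module)
            for name in names:
                if name.split(".")[0] in RISKY_MODULES:
                    findings.append(f"import of {name} at line {node.lineno}")
    return findings
```

A screen like this will miss obfuscated payloads and flag plenty of benign code, which is exactly why the article stresses continuous upskilling rather than any single tool.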
Compliance and regulatory issues
ChatGPT’s potential to be hacked and used to disseminate dangerous misinformation is increasingly discussed in conversations about AI security. If bad actors were to manipulate the AI into providing seemingly objective but biased information, ChatGPT could become a dangerous propaganda machine. This possibility highlights the need for enhanced government oversight of advanced AI tools and of companies like OpenAI. The launch of ChatGPT and other generative AI products should require regular security reviews and minimum security measures to reduce the risk of hacking.
To prevent this technology from becoming unwieldy, a shift in our mindset and attitude toward AI is required. Developers need to ensure that their tools have an ethical programmatic core that prohibits manipulation before releasing them to the public. Standards must be established to hold developers accountable when they fail to uphold these ethical principles. Organizations have already instituted technology-agnostic standards to ensure the safety and ethics of exchanges across different technologies, and the same principles should be applied to generative AI.