Google has introduced a new cybersecurity suite called Cloud Security AI Workbench, which is powered by a specialized “security” AI language model called Sec-PaLM. This cybersecurity suite is aimed at leveraging the benefits of generative AI for cybersecurity purposes. Sec-PaLM is an offshoot of Google’s PaLM model, with a focus on security use cases, and incorporates security intelligence such as research on software vulnerabilities, malware, threat indicators and behavioral threat actor profiles.
Cloud Security AI Workbench comprises various AI-powered tools, including Mandiant’s Threat Intelligence AI, which leverages Sec-PaLM to identify, summarize and act on security threats. VirusTotal, another Google property, uses Sec-PaLM to help subscribers analyze and explain the behavior of malicious scripts. Furthermore, Sec-PaLM assists customers of Chronicle, Google’s cloud cybersecurity service, in searching security events and interacting “conservationally” with the results. Google’s Security Command Center AI users will also benefit from Sec-PaLM, receiving “human-readable” explanations of attack exposure, including impacted assets, recommended mitigations and risk summaries for security, compliance and privacy findings.
While the use of generative AI for cybersecurity purposes has become a new trend in the generative AI space, the technology also poses risks. Deepfake attacks, where artificial intelligence is used to create fake audio and video, are becoming more prevalent. Hackers could use generative AI to create fake audio and video recordings of executives or high-profile individuals, potentially causing reputational damage or financial loss. While AI can help enhance cybersecurity, there is a need to develop countermeasures against its malicious use, and companies need to ensure that they are using AI in a responsible and ethical manner.
Google touted its Sec-PaLM tool as a product of years of foundational AI research by Google and DeepMind, and the expertise of their security teams. Google expressed enthusiasm for the potential of generative AI in the security field and plans to continue to leverage this technology to drive advancements across the security community.
However, Google’s ambitions may be premature as their first tool in the Cloud Security AI Workbench, VirusTotal Code Insight, is only available in a limited preview at this time. While “recommended mitigations and risk summaries” may sound promising, it is unclear how effective Sec-PaLM is in practice, and whether the suggestions are actually more precise because an AI model produced them.
One of the major challenges with using AI for cybersecurity is that even the most advanced AI language models can make mistakes and are susceptible to attacks like prompt injection, which can cause them to behave in unintended ways. Despite these challenges, Microsoft recently launched Security Copilot, a new tool that also uses generative AI models from OpenAI, including GPT-4, to “summarize” and “make sense” of threat intelligence, claiming that it will better equip security professionals to combat new threats. However, there is currently a dearth of studies on the effectiveness of generative AI for cybersecurity, so it remains to be seen whether it will live up to the hype.