Its chatbot can provide explanations on ways hackers could breach a system, and supply methods to prevent or deter this from happening.
Google has launched quite a few cybersecurity operations over the past few years, the two major ones being the cyber intelligence unit Mandiant and the security operations platform Chronicle. These two, alongside a few other services, will be folded into a new unified platform called Google Cloud Security AI Workbench.
The unified platform will have a generative AI chatbot similar to ChatGPT, built on a version of Google’s PaLM model adapted for security. Users can upload lines of code for analysis and converse with the chatbot without needing a specialized vocabulary.
“We’ve trained the large language model (LLM) on all of the Mandiant threat intel data and the Google threat intel data so you can create an ‘industry-first security LLM’ but ensconce it in an enterprise-grade platform that we’re calling the Security AI Workbench,” said Sunil Potti, vice president and general manager of security at Google Cloud, at RSAC 2023.
Google worked with its subsidiary DeepMind to build the chatbot, one of many new projects expected to come from collaboration between the two. DeepMind and the Google Brain team recently combined efforts, a move by Google to refocus its artificial intelligence work.
“We do a lot of work around security for preserving our consumer space, as well as our enterprise customers, so we thought, can we do something in the world of generative AI-based applicability, but do it in a way that could be more than just a product?” Potti told The Wall Street Journal.
Alongside the generative AI system, users can receive breach alerts from Mandiant and browse Chronicle’s large library of security data, all from a single dashboard. We anticipate that more cybersecurity applications will be added to the workbench in the future.
Google is allowing partners to collaborate on developing and improving its AI system. Accepted firms will be able to upload code to further improve the system’s detection and generation capabilities. Accenture is the first partner to sign up at launch, and Potti expects a few more big names to join over the summer.
Having an AI check and create code is still a strange concept for some programmers, although early results from OpenAI and Google seem positive. The current sales pitch is that it can be used as an assistant, able to read code, spot mistakes, and suggest alternatives, while the human programmer remains ultimately in charge of the process.
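The assistant workflow described here, reading code, spotting a mistake, and suggesting an alternative, can be illustrated with a classic case such an assistant might flag: SQL built by string interpolation. The snippet below is a generic sketch in Python (function names are hypothetical and this is not Google's tooling), showing the flaw and the parameterized-query fix an assistant would typically recommend.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # The kind of bug an AI assistant would flag: interpolating user
    # input into SQL allows injection.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # The suggested alternative: a parameterized query, where the
    # driver handles escaping.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"  # classic injection payload
print(len(find_user_unsafe(conn, payload)))  # 2 -> injection returns every row
print(len(find_user_safe(conn, payload)))    # 0 -> no user has that literal name
```

Even in this toy case the human stays in charge: the assistant points out the pattern and proposes the fix, but the programmer decides whether the rewrite preserves the intended behavior.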
Whether that will still be true in five years is another question, especially given the talent-to-demand disparity in most programming sectors, particularly cybersecurity and cloud. Businesses may look to employ an AI instead of a specialist, allowing more general or low-code technicians to implement security code from AI services.