An Airbnb security expert outlines how automation, AI and machine learning can support a security strategy, and where humans play a role.
Artificial intelligence, machine learning and automation are finding a warm welcome when utilized for IT security. Yet, that doesn’t mean humans are being kicked to the curb.
Rather, enterprise security managers need to understand the different roles that can be played by AI and by people. Each entity is good at certain tasks, but not so strong on others. That message came through in a presentation by Vijaya Kaza, chief security officer and head of engineering and data science for trust and safety at Airbnb. She recently spoke during Sumo Logic’s Illuminate conference.
For starters, managers must acknowledge the challenge presented to security teams by today’s torrent of data and online activity in any enterprise. Companies cannot overcome that issue without the help of automation, according to Kaza. She noted that AI and machine learning are crucial to doing “the basics.”
“From my own practical experience, I can tell you that outside of all the hype and the excitement around these solutions, getting the basics right and getting the basics at scale is really top of mind. When it comes to automation in this urgent and important matrix, that is certainly top of my list.”
Security automation and the basics
Those basics include tasks such as patch management and vulnerability management. “Basic security hygiene is not necessarily the most exciting thing, but it is so important,” said Kaza.
Automation contributes in several ways, according to Kaza. It’s a way to ensure quality and consistency and to avoid human errors. For example, automated templates and “hardened configurations” provide guidance when setting up new environments.
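The “hardened configurations” idea can be sketched in a few lines. This is a hypothetical illustration, not Airbnb’s tooling: a vetted baseline is overlaid on every requested environment, and a drift check reports any setting that deviates from (or omits) the baseline.

```python
# Hypothetical sketch: a hardened baseline template applied whenever a new
# environment is provisioned, so setups are consistent and free of the
# one-off human errors the article describes.

HARDENED_BASELINE = {
    "ssh_password_auth": False,   # keys only
    "tls_min_version": "1.2",
    "root_login": False,
    "audit_logging": True,
}

def harden(environment_config: dict) -> dict:
    """Overlay the baseline on a requested config; the baseline wins on conflicts."""
    merged = dict(environment_config)
    merged.update(HARDENED_BASELINE)
    return merged

def drift(environment_config: dict) -> dict:
    """Report settings that deviate from, or are missing relative to, the baseline."""
    return {
        key: environment_config.get(key)
        for key, expected in HARDENED_BASELINE.items()
        if environment_config.get(key) != expected
    }

requested = {"ssh_password_auth": True, "tls_min_version": "1.0", "region": "us-east-1"}
print(harden(requested))   # baseline enforced, extra settings preserved
print(drift(requested))    # what an audit alert would flag
```

The template both guides setup (via `harden`) and doubles as an audit reference (via `drift`), which is the quality-and-consistency benefit Kaza describes.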
Citing another use case, Kaza outlined how smart automation is valuable in “enhancing user experience as a way to improve security posture”. Getting access controls right from the start can help end users or IT staff work efficiently, work within security guidelines, and stay happy while still protecting systems.
It isn’t easy to employ automation for security, said Kaza when asked about automation’s challenges. She mentioned the “3 T’s” of time, talent, and technology. However, she added that culture (getting people to accept automation and security) and process are also important.
“Automation, if you think about it, really works well when you have the underlying workflow or process nailed really well. Then you are just using automation to codify some of those practices so you are being more efficient,” she said.
Take the example of patching. If there’s a culture in which development teams don’t patch on a regular basis, there’s no point in automating things like ticket generation; the habit of patching when it’s needed has to exist first for automation to support security goals.
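Codifying an existing patching workflow, as Kaza describes, can be as simple as turning scan findings into tickets. A minimal, hypothetical sketch (the field names and ticket shape are illustrative, not a real scanner or ticketing API):

```python
# Hypothetical sketch: automation codifying an established patching
# process by generating one ticket per host/package that is behind
# the fixed version reported by a vulnerability scan.

def generate_patch_tickets(scan_results):
    """Create a ticket for each finding where the installed version lags the fix."""
    tickets = []
    for finding in scan_results:
        # Naive string comparison; real tooling would parse versions properly.
        if finding["installed"] < finding["fixed_in"]:
            tickets.append({
                "title": f"Patch {finding['package']} on {finding['host']}",
                "detail": f"{finding['installed']} -> {finding['fixed_in']}",
                "priority": "high" if finding["severity"] == "critical" else "normal",
            })
    return tickets

scan = [
    {"host": "web-01", "package": "openssl", "installed": "1.1.1",
     "fixed_in": "1.1.1u", "severity": "critical"},
    {"host": "web-02", "package": "curl", "installed": "8.5.0",
     "fixed_in": "8.5.0", "severity": "low"},  # already patched, no ticket
]
for ticket in generate_patch_tickets(scan):
    print(ticket["title"], "-", ticket["priority"])
```

The automation only pays off because the workflow it encodes (scan, triage, patch) already exists, which is exactly Kaza’s point about process coming first.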
The us and them of automation
Where does the job of automation end and the role of humans start?
Experts advocate for end-to-end automation, but that can turn into a monumental task, particularly if elements such as legacy systems are in play, said Kaza. A more reasonable approach is to automate certain parts of the workflow but keep human intervention in the process.
She noted the example of robotic process automation. If a company has specific rules about system configuration, automation may detect that a user is running an out-of-date operating system and issue an alert; the mitigation would then be left to humans. “We’re several years away from taking the human out of the loop completely,” she said.
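The alert-only division of labor above can be sketched briefly. This is a hypothetical illustration (host records, OS names, and version floors are made up): the automation detects and reports, while remediation stays with a human.

```python
# Hypothetical sketch: automation detects out-of-date operating systems
# and raises alerts, but deliberately stops short of remediating --
# the human stays in the loop for mitigation.

MINIMUM_SUPPORTED = {"ubuntu": (20, 4), "macos": (12, 0)}

def parse_version(version: str) -> tuple:
    """Turn '20.4' into (20, 4) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def check_fleet(hosts):
    """Return an alert for every host running an OS below the supported floor."""
    alerts = []
    for host in hosts:
        floor = MINIMUM_SUPPORTED.get(host["os"])
        if floor and parse_version(host["version"]) < floor:
            alerts.append(
                f"{host['name']}: {host['os']} {host['version']} is below "
                f"{'.'.join(map(str, floor))} -- needs human review"
            )
    return alerts

fleet = [
    {"name": "laptop-17", "os": "macos", "version": "11.6"},   # behind the floor
    {"name": "build-02", "os": "ubuntu", "version": "22.4"},   # fine
]
for alert in check_fleet(fleet):
    print(alert)
```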
Besides the challenge of balancing the roles of people and automation, there are other hurdles that organizations face when it comes to scaling AI and machine learning. One is the data, particularly the data sets used to train ML. Then, there is the issue of system and infrastructure performance.
AI is being used extensively in security today for supervised machine learning applications. Systems are being trained for tasks such as threat detection, anomaly detection, and asset discovery. Unsupervised learning is being used to detect certain patterns that may signal issues such as bots. Not requiring training data, unsupervised ML focuses on clustering common data points.
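The unsupervised idea (no labeled training data, just grouping common data points and flagging whatever sits far from the bulk) can be shown with a toy example. This is a deliberately simplified, hypothetical sketch using a standard-deviation cutoff on request rates; production bot detection would use far richer features and real clustering.

```python
# Hypothetical sketch of unsupervised anomaly detection: no labels are
# needed -- "normal" is learned from the data itself (mean and spread of
# requests per minute), and far-out points are flagged as possible bots.

def mean(values):
    return sum(values) / len(values)

def stdev(values):
    m = mean(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

def flag_outliers(rates, threshold=2.5):
    """Flag rates more than `threshold` standard deviations from the mean."""
    m, s = mean(rates), stdev(rates)
    return [r for r in rates if s and abs(r - m) / s > threshold]

# Mostly human-like traffic, plus one client hammering the API.
rates = [12, 9, 11, 10, 13, 8, 10, 11, 9, 480]
print(flag_outliers(rates))  # the 480-requests/minute client stands out
```

The threshold plays the same role Kaza highlights for real systems: set it too low and false positives climb, which is why large, high-quality training data matters at scale.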
“The technology behind machine learning isn’t the complication, but it’s complicated to get it implemented at scale and able to scale over time,” said Kaza, citing the need to minimize false positives. That’s where companies need access to large amounts of training data, and data quality remains a challenge. It’s difficult to identify a “single point of truth,” she said.
Security automation’s challenges
Then there is the challenge of providing the horsepower to detect security problems in real time. That requires a strong data science and data engineering practice, as well as investing in the right underlying infrastructure.
Many of the threats that organizations worry about need real-time detection, which demands high performance and low latency. Simply put, most threats don’t wait.
Looking to the future and the balance of humans and automation, Kaza said:
“Imagine a future where you have the AI brain sitting in the center of your security operations, if you will. It’s watching your environments, the data flows, the users, the assets, and really understanding the big picture of what’s happening and learning it on its own. And if something changes, that AI brain could immediately give you the recommendations proactively to say, here’s how you can do this.”