by Tom Groenfeldt, Forbes
An artificial intelligence engine can do a much better job of detecting security threats when it has a little help from a human, according to Kalyan Veeramachaneni, principal research scientist at MIT's Laboratory for Information and Decision Systems.
“Unsupervised learning is not enough,” he said during a presentation at SWIFT’s Sibos conference in Geneva. A security analyst can play a key role in identifying threats that computers and data scientists might miss, since data scientists are typically not security experts.
Collaborating with PatternEx, a startup in the infosec space, Veeramachaneni set out to build an interactive system that would take feedback from a security analyst through a supervised learning model. “We are replicating what an analyst would say — we call it the virtual analyst.” The model captures the knowledge of a security analyst and tries to predict whether activity constitutes an attack, something he called an augmented system.
Unsupervised learning, or outlier detection, can only point out what is or isn’t an outlier; that makes it the first filter to use when working with data from 30,000 users and several million log lines. An analyst looking at the activity provides subjective assessment and intuition, Veeramachaneni said: a person can consider multiple pieces of information simultaneously and pull in external sources to tell the system whether something is an attack. As they found, activities with a low outlier score can sometimes be attacks, and the converse is also true.
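The loop Veeramachaneni describes — outlier detection surfacing candidates, an analyst labeling a handful of them, and a supervised model learning from those labels — can be sketched roughly as follows. This is a hypothetical illustration using scikit-learn on simulated data, not PatternEx's actual system; all feature values and model choices here are assumptions for the sake of the example.

```python
# Hedged sketch of an "augmented" detection loop: unsupervised outlier
# detection acts as a first filter, a (simulated) analyst labels the
# surfaced candidates, and a supervised model -- the "virtual analyst" --
# learns from that feedback. Illustrative only, not PatternEx's method.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

# Simulated log-derived features: mostly normal activity plus a few attacks.
normal = rng.normal(0, 1, size=(1000, 5))
attacks = rng.normal(4, 1, size=(10, 5))
X = np.vstack([normal, attacks])
y_true = np.array([0] * 1000 + [1] * 10)  # ground truth, unknown to the system

# Step 1: unsupervised outlier detection as the first filter.
iso = IsolationForest(random_state=0).fit(X)
scores = -iso.score_samples(X)           # higher = more anomalous
candidates = np.argsort(scores)[-50:]    # top-50 outliers shown to the analyst

# Step 2: the "analyst" labels only the surfaced candidates. Here the labels
# come from ground truth; in practice this is human expert feedback.
labels = y_true[candidates]

# Step 3: a supervised model learns from the analyst's labels and then
# scores all activity, including events the outlier filter ranked low.
clf = RandomForestClassifier(random_state=0).fit(X[candidates], labels)
attack_prob = clf.predict_proba(X)[:, 1]
```

The point of the design is that the analyst never has to label millions of log lines: the outlier filter narrows the work to a small candidate set, and the supervised model generalizes that expert feedback across the rest of the data.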
Veeramachaneni calls this expert sourcing, as distinct from crowd sourcing. In some artificial intelligence programs, like image recognition, everyone can give useful feedback, he said. “It doesn’t work like that for log tables in security. We have a very small pool of people who are security experts.”
In an experimental setup using data from a massive retailer, PatternEx worked with 3.6 billion log lines, 70.2 million entities and 318 known attacks. Outlier detection alone identified just 18% of the attacks, while the augmented AI system found 85%.
Veeramachaneni is also co-founder of PatternEx (www.patternex.com), which is a partner in this research and sells a commercial product for enterprises. His comments on people and computers working together addressed a concern raised in another session by Amber Case, a fellow at Harvard’s Berkman Klein Center.
“I don’t like the term artificial intelligence,” she said, “because it implies no humans. But technology is created by humans. You evolve it over time with a person and a computer.”