Like living things, technologies evolve to better suit their environment, which in this case is the security intelligence space. Others have made this point: there is an evolution under way from rules-based approaches to analytic approaches (e.g., from SIEM to UEBA).
Technology solutions require the work of experts such as product managers, engineers and data scientists to evolve. What they don't do is improve automatically from user feedback. The real trick in security intelligence is to get supervised learning models to update automatically, without the need for a data scientist. Pull this off, and you have built a system that evolves on its own: a system that learns.
This concept of a learning software system seems so incredible that one's mind tends to skip over the implications, because those implications force us out of our usual paradigm. It is much easier to stay in the old model of incremental improvements: adding new tools that reduce alerts or improve detection in a specific area. Much harder to grasp is an AI platform that can function as a human analyst: it can handle multiple use cases, analyze all your data with the same inductive skill as your human analysts, and be tailored to your unique set of applications, networks and policies. And it does all this through learning.
Adopting an AI platform and letting it learn is akin to hiring a bunch of new analysts and then teaching and training them to be level 2 or level 3 analysts. The benefits are not trivial: a learning AI system will be able to continuously adapt to new attacks, spot things you’ve never seen before, and point out things you’ve never thought of.
Soon, other systems will be like extinct organisms: artifacts of a past world, outcompeted by faster, stronger and better species. Machine learning (ML) systems that do not automatically update their supervised learning models might start off reducing false positives and false negatives compared with signatures and rules, but over time the attacks will change, the enterprise's infrastructure will change and the company's policies will change. The efficacy of those ML models will decay, someone will have to improve them in the context of the new attack environment, and you are back to square one.
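The decay described above is measurable in practice. One simple way to see it, sketched below under assumed numbers, is to track the model's rolling agreement with analyst verdicts and flag when it falls below a threshold; the window size, threshold and minimum sample count are all illustrative assumptions.

```python
# Minimal sketch of detecting model decay: rolling accuracy against
# analyst verdicts, with an alarm when it drops below a threshold.
from collections import deque

class DecayMonitor:
    def __init__(self, window=100, threshold=0.9):
        # 1 = model agreed with the analyst, 0 = it did not
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, model_label, analyst_label):
        self.outcomes.append(1 if model_label == analyst_label else 0)

    def needs_retraining(self):
        """True once rolling accuracy drops below the threshold."""
        if len(self.outcomes) < 50:  # not enough evidence yet
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.threshold

monitor = DecayMonitor(window=100, threshold=0.9)
for _ in range(60):
    monitor.record(1, 1)   # model tracks analysts: no alarm
print(monitor.needs_retraining())  # → False
for _ in range(40):
    monitor.record(1, 0)   # attacks change; model falls behind
print(monitor.needs_retraining())  # → True
```

A static model trips this alarm again and again as the environment shifts; a model that learns from the same feedback stream keeps the alarm quiet.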