We don't need more InfoSec analysts to write rules and investigate alerts. We need more InfoSec analysts to train AI infrastructures to detect attacks.
By now you’ve heard the claim that there is a talent gap in InfoSec. Various sources claim that the unmet demand for security professionals exceeds one million headcount. Their argument is basically this: attacks are not being detected quickly or often enough, and the tools we use are generating too many alarms, so we need more people to investigate those alarms. Makes sense, right?
Even if we miraculously hired a million infosec professionals tomorrow and dropped them into every company around the globe, there would be no change in detection effectiveness and we would still have a “talent gap.”
In order to explain why, we need to take a step back.
How do we classify a person as a criminal in the real world? By their actions. We observe their behavior and apply context, nuance, and intuition to decide whether they are a criminal or not.
In cyberspace we do the same thing, but it is much more difficult. Current infosec solutions are rules-based. Humans set rules to detect attacks. But capturing context, nuance, and intuition using if-then rules is an impossible task.
Writing more rules, or constantly tuning older rules, isn’t the answer. Rules themselves are the problem. Rules fail to detect new attacks and spew out more alerts than your team can handle. More analysts will simply generate more rules, which will generate more alerts... requiring more analysts.
The good news is that we can do a much better job of approximating the power of the human mind using Artificial Intelligence models. Certain models called “supervised models” are able to mimic context, nuance, and intuition by forming abstractions of behaviors.
To a supervised model, a “behavior” is the aggregate of all the logged information about an entity. For example, things like: packets sent, packets received, length of connection, periodicity of connections, and so on. There are hundreds of these actions that, in total, describe the behavior of an entity.
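To make this concrete, here is a minimal sketch of how logged actions might be aggregated into a behavioral feature vector. The field names, feature choices, and the `Connection` record are illustrative assumptions, not any specific product's schema.

```python
# Sketch: aggregating an entity's raw connection logs into a behavior vector.
# All names here (Connection, behavior_features) are hypothetical examples.
from dataclasses import dataclass
from statistics import mean, pstdev


@dataclass
class Connection:
    packets_sent: int
    packets_received: int
    duration_s: float
    start_time: float  # seconds since epoch


def behavior_features(conns):
    """Summarize one entity's connections as an aggregate feature vector."""
    starts = sorted(c.start_time for c in conns)
    gaps = [b - a for a, b in zip(starts, starts[1:])]
    return {
        "total_packets_sent": sum(c.packets_sent for c in conns),
        "total_packets_received": sum(c.packets_received for c in conns),
        "mean_duration_s": mean(c.duration_s for c in conns),
        # Low deviation between connection gaps ~ highly periodic traffic
        "gap_stddev_s": pstdev(gaps) if len(gaps) > 1 else 0.0,
    }


# Four connections exactly 60 seconds apart: perfectly periodic behavior.
conns = [Connection(10, 120, 2.5, t) for t in (0.0, 60.0, 120.0, 180.0)]
print(behavior_features(conns))
```

A real system would compute hundreds of such aggregates, but the idea is the same: raw log lines in, one behavior vector per entity out.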
Once the behavior is modeled, it must be classified as either “malicious” or “benign.” Initially, only a human can do this. Only a human knows the company’s risk policies, and only a human, using context, nuance, and intuition, can review a behavioral pattern and classify it as an attack or not.
That classification step is called “labeling,” and when that label is attached to a behavior, the supervised learning model forms an abstraction of a certain attack. This abstraction is a series of statistical distributions of hundreds of behaviors that, in aggregate, form a model of the attack pattern. The supervised learning model “learns” the intuition, context, and nuance from the human label, and then applies it to ALL behavior patterns across the entire enterprise.
The power of this abstraction is not only the ability to detect all attacks matching the pattern, but also to recognize zero-day attacks that are similar to it.
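As a toy sketch of this labeling-and-abstraction loop (deliberately simplified, and not any vendor's implementation): the “abstraction” below is just the centroid of each labeled behavior class, and new behaviors are classified by which abstraction they sit closest to. The feature vectors and labels are invented for illustration.

```python
# Sketch: human labels turn behavior vectors into an attack abstraction.
# Real supervised models learn richer statistical distributions; a per-class
# centroid is the simplest possible stand-in for that idea.
from math import dist


def train(labeled):
    """labeled: list of (feature_vector, "malicious" | "benign") pairs."""
    sums, counts = {}, {}
    for vec, label in labeled:
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, [0.0] * len(vec))
        sums[label] = [a + b for a, b in zip(prev, vec)]
    # The learned "abstraction": the centroid of each labeled class.
    return {label: [s / counts[label] for s in sums[label]] for label in sums}


def classify(model, vec):
    """Assign the label whose behavioral abstraction is closest."""
    return min(model, key=lambda label: dist(model[label], vec))


# Hypothetical vectors: (packets sent, connection-periodicity score)
model = train([
    ([900.0, 0.95], "malicious"),  # analyst labeled this beaconing pattern
    ([850.0, 0.90], "malicious"),
    ([40.0, 0.10], "benign"),
    ([55.0, 0.20], "benign"),
])

# A never-before-seen behavior near the malicious abstraction is still caught:
print(classify(model, [870.0, 0.88]))  # → malicious
```

The last line is the zero-day point: the exact vector was never labeled, but it falls close enough to the learned attack abstraction to be flagged.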
Man and machine together. Fighting crime!
Given the constant changes involved in attack detection, humans will always be needed. Within your company, risk policies change overnight. M&A happens. Your infrastructure changes. Or your company decides to add mobile as a distribution channel. Meanwhile, attackers change the type and volume of attacks. This reality is too dynamic for static rules to be effective. The one entity that can figure out which behaviors are malicious and which are benign, given your current risk profile, is the InfoSec analyst. However, the analyst needs an AI infrastructure to not only capture their context, nuance, and intuition, but also to scale it across the entire enterprise. In real time.
To be clear, humans are still in high demand: there aren’t enough of them to train AI systems.
That’s the true “talent gap.”