
InfoSec: Cut Your False Positives to Zero?

Reducing False Positives

A question we’re often asked is whether PatternEx can reduce false positives, and if so, by how much?

It’s an interesting question. My flip answer is: sure, we can reduce false positives to zero. We just won’t show you anything. But then, of course, you’ll miss everything. So here’s the more serious answer:


Measuring Threat Detection Efficacy

When building Artificial Intelligence systems for InfoSec, one of the key challenges is how to measure the efficacy of such systems so we can have productive discussions about performance.

Consider a scenario where there are 10 attacks per 1 million events, a typical InfoSec scenario. In this case, it’s usually not productive to focus only on reducing false positives. We should also look at how well the system detects those attacks. The right metric for detection is recall (or true positive rate), which measures how many attacks we detected among all the attacks that actually happened.

But focusing on recall alone doesn’t give an accurate picture either. We can push recall to 100% by alerting on every event, but the cost in analyst hours to evaluate all those alerts would go through the roof.
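To make that trade-off concrete, here is a minimal Python sketch (the numbers are illustrative, not from any real deployment): recall rewards detected attacks but is blind to how many alerts were raised to find them.

```python
def recall(attacks_detected: int, total_attacks: int) -> float:
    """True positive rate: fraction of all attacks that were detected."""
    return attacks_detected / total_attacks

# 10 attacks hidden in 1,000,000 events, as in the scenario above.
# Alerting on every single event catches all 10 attacks...
print(recall(10, 10))  # 1.0
# ...at the cost of 999,990 false positives to triage.
```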

The Right Metric: Pattern Detection Ratio

So the right metric has to take into account both the reward (detected attacks) and the cost (total number of alerts to investigate).

For example, consider a system that surfaces 2 out of 10 attacks in 100 alerts versus another that surfaces 3 out of 10 attacks in 100 alerts. Which one is better? The second looks better, but what happens if we raise the number of alerts? Say that at 200 alerts the first system detects 5 attacks while the second detects 4. Now the first system looks better. A single data point isn’t enough to draw a conclusion.

To draw an accurate comparison between these systems, we have to systematically measure recall at various alert levels and plot the results. The system with the larger area under the curve is the better system. The area can be computed using the simple trapezoidal rule.

For example, consider two systems whose recall is measured at alert levels of 0, 100, 200, 300, 400, and 500. Suppose the first system’s recall at those levels is 0, 0.2, 0.4, 0.5, 0.6, and 0.7. Using the trapezoidal rule, the area under the curve for the first series is 205. Below is the calculation of that area:

(0 + 2*0.2 + 2*0.4 + 2*0.5 + 2*0.6 + 0.7) * ((500 - 0) / (2*5)) = 4.1 * 50 = 205

Using the same method, the area under the curve for the second series is 215. So the second system is better.
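As a sanity check, here is a small Python sketch of that comparison. The first series’ recall values are implied by the calculation above; the second series’ values are an assumption chosen to be consistent with the stated area of 215, since the original table isn’t reproduced here.

```python
def trapezoid_area(x, y):
    """Trapezoidal rule: sum of average segment heights times segment widths."""
    return sum((y0 + y1) / 2 * (x1 - x0)
               for x0, x1, y0, y1 in zip(x, x[1:], y, y[1:]))

alerts   = [0, 100, 200, 300, 400, 500]
system_1 = [0.0, 0.2, 0.4, 0.5, 0.6, 0.7]  # from the calculation above
system_2 = [0.0, 0.3, 0.4, 0.5, 0.6, 0.7]  # assumed; consistent with area 215

print(trapezoid_area(alerts, system_1))  # 205.0
print(trapezoid_area(alerts, system_2))  # 215.0
```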

At PatternEx, we use the ratio of the area under the curve to the maximum possible area (500 in the case above) to measure efficacy. We call this ratio the Pattern Detection Ratio (PDR). A PDR close to 1 means you are detecting all the attacks with very few false positives; a PDR close to 0 means you are detecting almost nothing while your false positives remain high. This grounds our conversations in objective metrics and lets us measure ROI effectively.
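In code, PDR is simply that area normalized by the maximum possible area. A minimal sketch, reusing the trapezoid_area helper from the previous example (pattern_detection_ratio is an illustrative name, not PatternEx’s actual API):

```python
def pattern_detection_ratio(alerts, recalls):
    """Area under the recall-vs-alerts curve, normalized by the maximum
    possible area (perfect recall of 1.0 at every alert level)."""
    max_area = max(alerts) * 1.0
    return trapezoid_area(alerts, recalls) / max_area

print(pattern_detection_ratio(alerts, system_1))  # 0.41
print(pattern_detection_ratio(alerts, system_2))  # 0.43
```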

So, what’s your PDR?
