Attacks in the Abstract: Detecting New Attacks with AI

Consider Funshion malware. Sometimes classified as "aggressive malware," Funshion has a base code that is over four years old, yet it still bypasses endpoint protections. It makes minor modifications to itself, rendering it invisible to the rules and signatures designed to catch it. Today there are well over a dozen variants in the wild, each one effectively a new attack that static rules cannot stop.
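To see why minor modifications defeat static detection, consider the simplest kind of rule: a hash of a known sample. The sketch below (plain Python, with placeholder bytes standing in for a real payload) shows that flipping a single bit produces a completely different hash, so the variant no longer matches the signature at all.

```python
import hashlib

# The simplest "signature": the SHA-256 hash of a known sample.
# Placeholder bytes stand in for a real payload.
known_sample = b"...funshion payload bytes..."
signature = hashlib.sha256(known_sample).hexdigest()

# A variant that differs from the original by a single bit.
variant = bytearray(known_sample)
variant[0] ^= 0x01  # flip one bit in the first byte

print(hashlib.sha256(known_sample).hexdigest() == signature)   # True: the known sample matches
print(hashlib.sha256(bytes(variant)).hexdigest() == signature) # False: the variant evades the rule
```

The same brittleness applies to more elaborate string and byte-pattern rules: any rule that matches exact content can be sidestepped by changing that content slightly.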

The good news is that AI has been able to do what rules cannot: understand that subtle variations of malware are still malware. This means AI can detect known attacks as well as attacks it has never seen before. That capability alone puts it well beyond what rules can do. So how does it work?
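One way to make the idea concrete is a learned classifier over behavioral features rather than exact content. The sketch below is a minimal illustration using synthetic data and hypothetical features (files written, outbound connections, registry edits); it is not PatternEx's actual model or feature set. Because the model learns a decision boundary instead of memorizing samples, a variant that behaves like known malware is still flagged, even though it matches no training example exactly.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical behavioral features per sample:
# [files written per minute, outbound connections, registry edits].
# Synthetic data for illustration only.
rng = np.random.default_rng(0)
benign = rng.normal(loc=[2, 1, 0], scale=1.0, size=(200, 3))
malware = rng.normal(loc=[40, 25, 12], scale=5.0, size=(200, 3))

X = np.vstack([benign, malware])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A "new variant": behavior similar to known malware but identical to no
# training sample -- the kind of minor variation that beats a signature
# but not a learned decision boundary.
new_variant = np.array([[35, 22, 10]])
print(model.predict(new_variant))  # [1] -> flagged as malicious
```

The design point is generalization: the signature in the earlier sketch asks "have I seen exactly this before?", while the classifier asks "does this behave like what I have seen before?" The second question is the one that survives a dozen variants.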

Topics: Artificial Intelligence, Funshion, Abstractions