Detecting threats, especially new/unknown ones, will always be an imperfect science. That said, it appears the latest generation of threat detection technologies has been doing a relatively good job. Depending on your perspective, one might even say they’ve been working too well. Let me explain.
The scenario I’ve been encountering a lot recently is NOT one where an organization’s security infrastructure failed to identify malicious or unauthorized activity; rather, the failure has been in not responding to the resulting detection events in a timely manner (or at all).
To be clear, the ambiguous nature of much of the event data being generated remains a significant problem. Discerning actual threats from noise and then prioritizing them is still not as easy as it needs to be.
However, even when new threats are clearly identified and classified – either directly by the security infrastructure or as a result of a manual investigation – security teams still have to respond to them. The challenge for many organizations is that a combination of technical limitations (read: lack of integration) and cultural/operational biases (read: fear of disrupting the business) means that reconfiguring and updating the security infrastructure to counteract new threats remains a mostly manual, often disjointed, process. Add to this the rising volume of new threats – with AV-Test.org indicating that 83 million pieces of new malware were discovered in 2013 alone – and it comes as no surprise that organizations are starting to fall prey to a mounting backlog of unprocessed threat intelligence.
To overcome the technical obstacle involved here, security teams need to place greater emphasis on establishing integration and intelligent coordination among component-level controls. Overcoming the cultural bias against automated mitigation may not be as easy, but it’s hard to see where today’s enterprises have much of a choice. If steps aren’t taken to progressively increase the efficiency of processing and responding to new threat intelligence, then the rate of embarrassing and costly incidents that could actually have been prevented, or at least stopped sooner, will only continue to climb.
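To make the integration idea concrete, here is a minimal sketch of what programmatic response to new threat intelligence might look like: an indicator feed is turned into firewall block rules, with an allowlist guard so that over-zealous automation never touches known-good hosts. Everything here – the feed format, the rule syntax, the function and variable names – is hypothetical, chosen for illustration rather than drawn from any particular product.

```python
# Hypothetical sketch: converting a threat-intel feed of malicious IPs
# into firewall block rules. All names and formats are illustrative.

# Safeguard against business disruption: addresses here are never blocked.
ALLOWLIST = {"10.0.0.5"}

def plan_block_rules(indicators, existing_rules):
    """Return the new block rules to push, skipping allowlisted
    addresses and rules that are already in place."""
    new_rules = []
    for ip in indicators:
        if ip in ALLOWLIST:
            continue  # never auto-block critical internal hosts
        rule = f"deny ip {ip}"
        if rule not in existing_rules:
            new_rules.append(rule)  # only push rules that are actually new
    return new_rules

# Example run with a made-up feed and one rule already deployed:
feed = ["203.0.113.7", "10.0.0.5", "198.51.100.23"]
current = {"deny ip 198.51.100.23"}
print(plan_block_rules(feed, current))
# → ['deny ip 203.0.113.7']
```

Even a small guard layer like this lets a team automate the routine cases while keeping a human in the loop for anything that would touch allowlisted or business-critical assets.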
Maybe it’s just me, but it certainly seems preferable to take the heat for the occasional service disruption stemming from over-zealous mitigation efforts than to get stuck explaining how a failure to act on available information resulted in a major security breach and the exposure of sensitive customer data. What do you think?