Organizations are deluged with billions of security events every day, far too many for human analysts to cope with. But security analysts have a powerful ally in their corner: machine learning is tipping the advantage toward defenders.
Machine learning (ML) is changing how organizations approach threat detection and how they adopt and adapt their cybersecurity processes. The idea is not just to identify and prevent threats, but to mitigate them as well.
ML has the power to comprehend threats in real time, to understand the infrastructure of a company and its network design and attack vectors, and to protect and defend it with human talent and machine power. The algorithm—the machine—is capable of the unthinkable when it comes to data mining, data crunching, and data correlation, since it does what it does best tirelessly, without complaint, and without having a bad day.
Machines do not rest. They don't require sleep. They do their job persistently. Humans make mistakes; algorithms do exactly what they were designed to do, every time. Machines can also manage many information sources and not only correlate, but super-correlate, information about millions, billions, even trillions of events in a day. No cyber analyst could keep pace with that.
From a threat-detection perspective, ML is a game-changer. Here's why.
Reducing false positives
An algorithm can learn from its mistakes on the fly. In that way, it embodies the Japanese idea of kaizen, or continuous improvement: because it is always refining itself, it is always the best version of itself.
A good ML system is one that can "see" patterns of behavior, inferring the form of an attack and how to fight back. The algorithm can be trained on different types of attacks, can learn the methods attackers use to gain privileged access and move laterally, and can even adapt in real time to a situation. An excellent ML approach can learn from false positives.
False positives will always exist, but they shrink with each iteration because the machine is continuously learning. After implementing an ML system, organizations can reduce false positives by 50% to 90%.
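To make that feedback loop concrete, here is a minimal sketch in Python (using scikit-learn) of an online classifier that is corrected whenever an analyst marks an alert as a false positive. The features, labels, and values are illustrative assumptions, not drawn from any particular product.

```python
# Minimal sketch: an online classifier nudged toward fewer false positives
# each time an analyst corrects it. Features and values are illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical per-event features: [failed_logins, bytes_out_mb, distinct_ports]
X_train = np.array([
    [0, 1.2, 3],       # benign
    [1, 0.4, 2],       # benign
    [40, 250.0, 60],   # brute force plus exfiltration
    [25, 90.0, 45],    # lateral-movement probing
])
y_train = np.array([0, 0, 1, 1])  # 0 = benign, 1 = malicious

model = SGDClassifier(loss="log_loss", random_state=42)
model.partial_fit(X_train, y_train, classes=[0, 1])

# An analyst flags this alert as a false positive, so the corrected label
# is fed back in and the model adjusts without retraining from scratch.
false_positive_event = np.array([[3, 5.0, 8]])
model.partial_fit(false_positive_event, np.array([0]))

print(model.predict(np.array([[35, 180.0, 55]])))  # expected: [1]
```

The point is not the specific model, but the loop: every analyst correction becomes new training data, which is how false-positive rates fall over time.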
While ML decreases false positives, it can increase the speed at which threats are detected. That can dramatically shrink the window of compromise for a system.
Most of the elements in the NIST cybersecurity framework—identify, protect, detect, respond, and recover—can be accelerated via ML.
Not only can ML detect threats quickly, but it can detect 90% or more of all known and unknown threats using unsupervised and reinforcement learning.
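As a rough illustration of the unsupervised side, the sketch below uses an isolation forest to flag events that look unlike the bulk of normal traffic, with no attack labels at all. The two features and the contamination rate are assumptions made for the example.

```python
# Minimal sketch of unsupervised detection: an isolation forest separates
# the few events that behave very differently from everything else.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Mostly "normal" events: modest bytes out, few destination hosts.
normal = rng.normal(loc=[5.0, 3.0], scale=[2.0, 1.0], size=(500, 2))
# A handful of events that stand far outside that pattern.
unusual = np.array([[300.0, 80.0], [250.0, 60.0]])
events = np.vstack([normal, unusual])

detector = IsolationForest(contamination=0.01, random_state=42).fit(events)
labels = detector.predict(events)  # -1 = anomaly, 1 = normal
print("flagged:", int((labels == -1).sum()), "events")
```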
Closing the door on threat actors
Although ML can't predict future attacks, it's very good at predicting the next move by an adversary once an attack is detected. That allows it to quickly close the door on an intrusion. If a resource opens up a connection to a known malicious IP address, for example, ML can recognize that and automatically shut it down before any data is exfiltrated.
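The automated response itself can be simple once the detection fires. Here is a hypothetical sketch of that "close the door" step; the block_ip helper and the threat-intelligence set are placeholders standing in for a real firewall or EDR integration.

```python
# Hypothetical sketch: terminate outbound connections to known-bad IPs
# the moment they are seen, before data can leave the network.
KNOWN_MALICIOUS_IPS = {"203.0.113.66", "198.51.100.23"}  # documentation-range examples

def block_ip(ip: str) -> None:
    # In a real deployment this would call a firewall or EDR API.
    print(f"[action] blocking outbound traffic to {ip}")

def on_new_connection(src_host: str, dst_ip: str) -> None:
    if dst_ip in KNOWN_MALICIOUS_IPS:
        block_ip(dst_ip)
        print(f"[alert] {src_host} contacted known-bad {dst_ip}; connection terminated")

on_new_connection("web-server-01", "203.0.113.66")
```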
What an attacker will do after penetrating a system is largely known. The Mitre Corporation, for instance, maintains a constantly updated catalog of adversarial behaviors based on real-world observation. Called Mitre ATT&CK, it is a comprehensive representation of the behaviors attackers employ when compromising networks.
That's why, in the chess game between adversary and defender, once an attacker makes a move, ML can anticipate the likely follow-on moves and flag or block them.
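In code, that chess-game view can be as simple as a lookup from an observed ATT&CK technique to the techniques that commonly follow it. The mapping below is a tiny, hand-picked subset used purely for illustration; the real ATT&CK knowledge base is far larger and more nuanced.

```python
# Illustrative sketch: given an observed ATT&CK technique, list the
# techniques a defender should watch for (or block) next.
NEXT_LIKELY_TECHNIQUES = {
    "T1566": ["T1059", "T1003"],  # Phishing -> scripting, credential dumping
    "T1003": ["T1021"],           # Credential dumping -> lateral movement
    "T1021": ["T1048"],           # Lateral movement -> exfiltration
}

TECHNIQUE_NAMES = {
    "T1566": "Phishing",
    "T1059": "Command and Scripting Interpreter",
    "T1003": "OS Credential Dumping",
    "T1021": "Remote Services",
    "T1048": "Exfiltration Over Alternative Protocol",
}

def anticipate(observed_technique: str) -> list:
    """Return the technique IDs most likely to follow the observed one."""
    return NEXT_LIKELY_TECHNIQUES.get(observed_technique, [])

for tid in anticipate("T1003"):
    print(f"watch for {tid}: {TECHNIQUE_NAMES[tid]}")
```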
Malicious actors discover ML
Still, cyber criminals aren't stupid. They realize that they, too, can use ML to automate their attacks and eliminate most human intervention. They can write an algorithm, train it with a pattern of attack and, while the machine runs its sorties, kick back with a martini by the pool.
That's why defenders need to use ML at every attack vector—at the gateways, at the endpoints, in the cloud—because if there's a gap in a system's defenses, an adversary's ML algorithm will find it.
The new cyber criminal isn't some kid in a dark basement with a computer. It's often a criminal group that's using ML to launch large-scale attacks on thousands of companies at the click of a virtual button.
Those attacks can be highly selective, too. For example, the code of the Petya/NotPetya malware, which disrupted operations at large companies throughout the United States and Europe in June 2017, contained instructions to skip any system where certain antivirus programs or other security measures were in place and move on to the next target.
After all, when you're attacking millions of systems, why waste time on those that take security seriously?
Human-machine collaboration
As good as machines are at identifying and mitigating threats, they'll never be able to do 100% of the job. That's the stuff of science fiction.
You need human analysts to confirm some actions, make final decisions, and identify exceptions. But with an estimated one million unfilled cybersecurity jobs worldwide, there aren't enough analysts to go around.
The large majority of the work security analysts are saddled with today is triage: sorting through alerts to find those that need further scrutiny. Fortunately, ML can handle that kind of work effectively and efficiently, freeing up analysts' time to address serious threats.
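A triage model can be as straightforward as scoring each alert and escalating only those above a threshold. The sketch below assumes made-up historical alerts with three hypothetical features; a real pipeline would use far richer context.

```python
# Minimal sketch of ML-assisted triage: score alerts and surface only the
# highest-risk ones to human analysts. All data here is fabricated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical alerts: [severity, asset_criticality, prior_hits]
X_history = np.array([
    [1, 1, 0], [2, 1, 0], [1, 2, 1], [3, 3, 2],
    [4, 4, 3], [5, 5, 4], [2, 2, 0], [5, 4, 5],
])
y_history = np.array([0, 0, 0, 1, 1, 1, 0, 1])  # 1 = needed analyst follow-up

triage_model = RandomForestClassifier(n_estimators=50, random_state=0)
triage_model.fit(X_history, y_history)

new_alerts = np.array([[1, 1, 0], [5, 5, 3], [3, 4, 2]])
scores = triage_model.predict_proba(new_alerts)[:, 1]

for alert, score in zip(new_alerts, scores):
    verdict = "escalate to analyst" if score >= 0.5 else "auto-close or monitor"
    print(f"alert {alert.tolist()} -> risk {score:.2f} -> {verdict}")
```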
When humans collaborate with machines on cybersecurity, they'll find that ML can help them a great deal.
"machine" - Google News
July 06, 2020 at 07:13PM
https://ift.tt/3iylnpC
AI and security: Machine learning is a threat detection game-changer - TechBeacon
"machine" - Google News
https://ift.tt/2VUJ7uS
https://ift.tt/2SvsFPt
Bagikan Berita Ini
0 Response to "AI and security: Machine learning is a threat detection game-changer - TechBeacon"
Post a Comment