Video: Stop the Breach or Spot the Breach?
A joint CIS & NNT Event
Join NNT in this educational session, as we partner directly with the Center for Internet Security (CIS) and Blake Frantz to address the topic of 'Why Security Automation is a critical asset to have'.
Thanks to Blake for the opening section - I'm Mark Kedgley, CTO, New Net Technologies. Stop the Breach or Spot the Breach is a good play on words and neatly summarizes what we believe is the only realistic and practical approach to Information Security, particularly as the whole industry is overly preoccupied with trying to Stop the Breach, with too little focus on contingency planning for when a breach succeeds.
As Blake has covered, hardening is essential and remains the first step any organization needs to take in order to Stop the Breach. It is a complex area, but one that lends itself well to automation, as we will illustrate later. NNT take a unique approach, though, that provides the best of what the traditional vulnerability scanner offers while addressing its known flaws and shortcomings.
NNT are one of a handful of CIS Certified Vendors, and one of only two that have adopted a non-stop, continuous approach to system hardening and compliance management - an approach that delivers a number of key benefits for enabling a secure infrastructure.
So what does the ultimate hardened system look like?
The ultimate hardened system - remove from the network, switch off and lock away in a safe.
The battleground for hardening is fought between the need for security and the need for ease of operation. The requirement for open user access to applications is completely at odds with the need to shut out hackers and malware. Hardening measures need to strike a balance, and where the line is drawn ultimately determines how secure a system will be.
We talk a lot about the need for a layered approach to security, and this is as true as ever - making use of a blended mix of security defences is essential, although system hardening remains the single most valuable step any organization can take for security.
As challenging as the initial derivation of a hardened build standard to fit your organization, applications and working practices is (although Blake's free Benchmark Checklists will help you get started), it is nothing compared to the challenge of maintaining it.
Configuration Drift is the arch-enemy of system hardening, and it is a fact of life - over time, configuration settings will be changed, almost always in favour of ease of use, ease of maintenance or access - disabling UAC or a host-based firewall, for instance.
Mistakes will also be made during planned changes, introducing unexpected security weaknesses or even just affecting service delivery.
Then there are the vulnerabilities not yet known. Shellshock and Heartbleed existed for years, and while it is impossible to know whether they were exploited prior to their public disclosure and subsequent patching, their exposure means we now face critical vulnerabilities we never knew existed until a few weeks ago. The hackers certainly know about them too, so remediation via patching is urgent. This is a daily moving of the goalposts - you are only ever as secure as the most recent patch release.
And finally, the bad guys really are out to get you - the breaches at Target and Home Depot will end up costing hundreds of millions of dollars and enormous disruption. Any slight gap in security procedures and practices can and will be exploited - if a third party has access to your systems, make sure their passwords are updated regularly and that access is granted on a least-privilege basis. But the conscious, direct Insider Threat is also real and, by definition, almost impossible to defend against - the ultimate conflict between security and the need for access.
In summary, there is never going to be a 100% guarantee of security.
You can actually separate the issues raised into two camps - Change Control summarizes one issue, with Breach Detection being the other. However, whilst the drivers are quite different in origin and intent, both can be tackled in the same way.
If we want to maintain security and either Stop the Breach, or at least Spot the Breach, we are going to need significantly better visibility of what is going on in our IT estate.
Taking Target as an example, here was a breach that went undetected for 2 and a half weeks, resulting in the loss of personal information for over 70 million customers and over 40 million payment card numbers with estimated clean-up costs running into hundreds of millions of dollars. In fact just this morning I heard on the news that Target will be offering free home delivery for Christmas because, as their CEO put it, they are still trying to repair the damage done last year over Thanksgiving.
The breach could and should have been detected - the analysis that has been done after the event shows that the anatomy of the trojan attack used leaves plenty of clues - new system files, new services created, registry keys and values created.
Slide 9: So why was it allowed to do its work unnoticed? Target are like most organizations - they are using outdated scanning approaches to breach detection, plus there is too much noise, too much change activity, for them to conduct any sensible analysis.
Taking the first point - the outdated approach to breach detection.
Traditionally a vulnerability scanner is used to cover config drift detection, but it has always failed as a breach detection mechanism. Inspecting the entire filesystem of a host and comparing it to previous baselines makes the process cumbersome and slow, which in turn means it can only be used sparingly - at best, once a month. Going back to Target: they were breached for just two and a half weeks and suffered the damage they did, so it is clear that monthly scanning is not good enough for breach detection.
The NNT approach actually unifies the issues and tackles them with one solution. As we have said, whilst the Change Control issues could be addressed using a vulnerability scanner, Breach Detection cannot be provided by periodic but ultimately infrequent scans - scanning is just too inefficient, too resource-intensive, and will never be the real-time breach detection solution needed.
NNT's approach is different and superior in that we provide a real-time, continuous File Integrity Monitoring capability. Breach activity can be detected and alerted within seconds of occurrence.
We can do this because, unlike the scanner, we run a one-time baseline of all files, registry settings, installed software, running processes and services, user accounts, and security and audit policy - all the attributes that will reflect breach activity. We then just track changes - no changes, no resource used. It means we get continuous, real-time breach detection without the resource overhead and stop-start operation of the scanner. We'll demo this later.
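To make the baseline-and-track-changes idea concrete, here is a minimal sketch of the underlying concept - hash every file once, then report only what has appeared, disappeared or changed since the baseline. This is purely illustrative of how File Integrity Monitoring works in principle, not NNT's actual implementation (a real-time FIM agent would subscribe to filesystem change events rather than re-walk the tree):

```python
import hashlib
import os

def baseline(root):
    """One-time pass: map each file path under root to its SHA-256 hash."""
    snapshot = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                snapshot[path] = hashlib.sha256(f.read()).hexdigest()
    return snapshot

def diff(old, new):
    """Compare two snapshots - no changes means nothing to report."""
    added    = [p for p in new if p not in old]
    removed  = [p for p in old if p not in new]
    modified = [p for p in new if p in old and new[p] != old[p]]
    return added, removed, modified
```

The key point the sketch illustrates is that once the baseline exists, only deltas need attention - exactly why a change-driven approach scales where repeated full scans do not.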
The second issue raised when discussing Target was that they suffered from too much noise, too much change activity for them to conduct any sensible analysis.
By way of example….I know you may be thinking that he doesn't look like a hacker - but can anyone really tell you what a hacker does look like?
To explain the challenge of noise when operating breach detection… this is a typical afternoon on Naples beach - we have an office just a few minutes from the beach and it's always tempting for some of our team based there to take a slow lunch break. This is the view from the NNT Naples office window, and it should allow us to see any of our sales or support guys sneaking off for some extra sunshine.
(by way of contrast - this is the view from my office in London)
Back to Naples - If I am trying to see if any of our guys are wasting time at the beach I have to look pretty long and hard because there are too many other people on the beach, and too much other stuff going on.
Similarly with Breach Detection - I need a much more controlled view of what constitutes regular activity within my estate before I can establish what is unusual or irregular. Likewise with my Planned Changes - I need a clear understanding of when my planned changes happen and what they look like, in order to factor them out and thereby highlight the unusual, irregular activity that is potentially the result of a breach.
We use the term Improvement-Based Vulnerability Management because monitoring at this highly detailed level in order to detect breach activity inevitably means that, when you first start, you become aware of all sorts of activity you never knew was going on. Once you understand that this is actually acceptable, legitimate behaviour, you can improve your File Integrity Monitoring policy and, in doing so, sharpen your focus on revealing the unusual and irregular (and potentially breach-related) activity.
If I can do so, I get a much clearer picture of breaches when they happen, allowing me to take care of them. Get back to work!
How do we apply this approach to Target or any other major IT estate? To start with, we need to harden systems and restrict both access and the scope for changes to be made - both to reduce the opportunities for a would-be attacker by removing vulnerabilities, and to exert more change control over our internal team.
A static environment is one that remains secure and breach-free, but also one that makes it easy to highlight breach activity. Reducing the background noise enhances visibility of unwanted changes.
But every organization relies on agile IT systems that can be updated and improved, and usually the sooner the improvements are made the better. Nothing stands still for long, so the idea of a static, untouched infrastructure is never going to happen.
So what the IT Security team need - peace and tranquillity so that breaches can be spotted - doesn't square with what the development and operations teams need.
Change Control is the compromise - make changes, but in a controlled fashion.
One change that is unavoidable is Patch Management - this really is the love/hate, can't-live-with-it, can't-live-without-it IT necessity that both improves security and undermines it. Patches are unavoidable and never-ending - but don't worry, we can handle even these automatically and intelligently, including complex patch deployments across large estates.
Slide 17: How we handle patches is as follows - if you were to examine a patch deployment using something like Change Tracker, we would detect and report the full details of the changes identified - file changes and software update changes. Change Tracker captures the forensic detail that is needed.
As a human being, I can inspect the details of the changes reported and, provided I know what to expect from my patches, safely approve these changes as legitimate patch activity. This is great because if I then see an unexpected change I can do something about it - so if I am Target, I see the winxml.dll file, know it isn't a patch I have deployed, and kill it before it steals data.
Again, leveraging automation within Change Tracker, I don't need to manually decide which changes constitute the patch - I can simply auto-learn the pattern of changes directly from a donor machine. This means I can apply forensic analysis right down to the hash values of files, to ensure there are no Trojans muscling in on a planned patch deployment.
This means that even in a large estate, as patches are deployed, Change Tracker records the activity and automatically reviews them against any active Planned Change Templates.
The beauty is that what is then left is just the few unexpected, unaccounted-for changes - perhaps emergency changes, or in fact the result of breach activity.
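The donor-machine idea above can be sketched in a few lines. This is a conceptual illustration only (not Change Tracker's actual logic, and the function names are mine): learn the set of file changes a patch produces on a donor machine, then subtract that pattern from the changes observed on every other host, leaving only the exceptions for human review - matching right down to the file hash so a Trojan riding along with a patch stands out:

```python
def learn_template(donor_changes):
    """Build a Planned Change Template: the set of (path, new_hash)
    pairs recorded when the patch was applied to the donor machine."""
    return set(donor_changes)

def exceptions(host_changes, template):
    """Return the changes on a host that the template does not account
    for - the unexpected activity that warrants investigation."""
    return [change for change in host_changes if change not in template]

# Hypothetical example: the patch touched kernel32.dll and ntdll.dll on
# the donor; a monitored host additionally shows winxml.dll appearing.
template = learn_template([
    ("system32/kernel32.dll", "a1b2c3"),
    ("system32/ntdll.dll",    "d4e5f6"),
])
observed = [
    ("system32/kernel32.dll", "a1b2c3"),  # expected - part of the patch
    ("system32/winxml.dll",   "feedbeef"),  # not in the template
]
suspicious = exceptions(observed, template)
```

Here `suspicious` contains only the winxml.dll entry - the hash-level match filters out every legitimate patch change automatically, which is the point of reviewing against a learned template rather than by hand.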
Now I can patch as required but also get to operate the forensic breach detection necessary, all without any of the hassle and time-consuming manual analysis I would otherwise need to make.
So in summary, information security requires systems to be hardened and to stay hardened. Modern IT environments don't conform to Security Best Practices - lots of changes are being made, and not always in the best interest of maintaining security. Even in a well-run and secure estate, breaches can still happen through phishing, zero-day malware and insider attacks, so we need to implement breach detection capabilities.
NNT real-time FIM takes the best of the traditional vulnerability scanner but addresses the known flaws to provide a real-time configuration drift and breach detection capability that can operate continuously. Best of all, with the Closed-Loop Intelligent Change Control operation outlined, this can even cope with the noise generated by patching and other planned changes, automatically analysing change activity to isolate just the truly exceptional, unexpected events, including security breaches.
Thank you for your time - I will run a quick demo to demonstrate some of the concepts outlined but if you need to drop off the call at this stage no problem - please ask us to run a personalised demo at a later date. Also if you have any questions for Blake or myself please use the feedback form at the end of the webinar.
During the Webinar, you'll be hearing directly from both Blake and Mark as we discuss the importance of using best practices for secure configurations and why developing an information security strategy is key - prevention is still better than cure.
Breaches such as those at Target and Home Depot could have been mitigated by taking some fairly simple steps. Start with the implementation of a hardened build standard with precision change detection (such as the CIS Benchmarks, which are referenced as a source in the PCI DSS). Coupled with breach detection technology (a FIM-based Host Intrusion Detection System, or HIDS), this will ensure that, even if a breach is successful, you will at least be alerted to the fact immediately and be in a position to take action to prevent any confidential data loss.
Remember - Target lost data affecting over 70M individuals in just two and a half weeks, so where a breach can't be prevented, speed of detection is critical!
Sound interesting? This session is designed to help you review the real priorities for maintaining a highly secure environment without overburdening you or your organization with unnecessary expense and work.