Threat Management

How Critical is Dwell Time to Cyber Security?

The quicker a data breach is detected, the quicker it can be resolved, and the less damage is dealt. So, how do you reduce the time to detection?


Oliver Pinson-Roxburgh CEO & Co-founder

28/06/2018 10 min read

Introduction

If many of the recent threat reports are to be believed, we can assume that, on the whole, businesses are not improving when it comes to detecting a network breach. In those isolated cases where improvement can be seen, the improvement is small. The Mandiant M-Trends 2018 report states that the median global dwell time sat at 101 days (in 2017). I can believe that. In my many years of experience in the IT security industry, I’ve learned that an organisation’s ability to detect a breach has always been a problem and it’s a difficult one to resolve. It’s difficult because there are so many factors in play.

The factors businesses struggle with the most range from not knowing what information to collect to not knowing which tools and skills are needed to monitor the environment. Another, perhaps bigger, challenge lies in training the people who will conduct that monitoring. How do you keep them engaged and aware of the latest threats?

Dwell time is, in my opinion, one of the main concerns for today’s businesses. Before I get stuck into providing an answer for the above challenges, it’s probably best that I explain what dwell time actually is. Put simply, dwell time is the time it takes for an organisation to become aware of a breach. The longer this period is, the more damaging it’s likely to be.


Trust the reports?

I always take threat reports with a pinch of salt, as they are all too often biased or flawed. How they get their data is always worth considering, as this can skew the results one way or another. For example, if a company relies on endpoint data, then it is not likely to see a great representation of application attack data, which helps detect threats a lot earlier. Reports based on forensic results will, again, give you different conclusions, as forensic investigations are often instigated because of an initially undetected breach.

This is not to say that these reports are wrong or of little use. Often, they are all saying similar things, albeit with different data points. For the most part, you can take away one or two related points from all of them, the most important of which is that time is of the essence.


Internal versus external

Reports across the industry fail to agree on which is more effective when it comes to detecting a breach: internal or external teams. In my experience, internal teams are not usually the first to detect serious breaches. More often than not, it's an external source that notices something is amiss, such as an acquiring bank, law enforcement or even customers making contact to report something suspicious.

In some ways, it's difficult to provide a definitive answer as to which option is best. For starters, a company's customer base, industry and established infrastructure can influence it one way or the other. However, let's address a more important question: are businesses finding out about breaches too late?

Telecommunications company Verizon released a breach report stating that 60% of breaches take months to detect. I can believe this. After all, if it's an attacker's intention to infiltrate a network and make off with sensitive information, they're hardly likely to want to draw attention to themselves. This often means they will opt for stealth rather than overt action. If nothing seems obviously amiss, an internal team is unlikely to waste time actively looking for something. Often, a breach only becomes known once the attacker chooses to do something with the information acquired or the systems compromised, affecting servers, partners or, worse, customers. Because of this, the industry is starting to see a significant difference in dwell time between breaches detected by internal teams and those reported by external sources.


Amount of data

There is so much data being exchanged, controlled and processed now that a breach can be extremely damaging. What's more, legislation such as GDPR is putting a greater emphasis on the need to respond to breaches appropriately and efficiently. To meet compliance requirements and show business partners and customers that they are reliable, businesses need to prove they have a robust security system. If it takes them several months to discover a breach, any such claim will ring hollow, not to mention the fact that their customers may have already suffered irreversible damage.

So, organisations need to get better at detecting breaches, and to do so faster than an outside source such as a customer or a regulator. They need to look proactive and stable. In that case, what is an acceptable timeframe for detecting a breach? Well, if you want to avoid any serious damage, be it financial or reputational, then ideally it should take no longer than 24 hours. If you think that's too strict a timeframe, just consider what an attacker can realistically achieve in a single day once they've breached a network.

The quicker a breach is detected, the quicker it can be resolved, and the less damage is dealt. So, how do you reduce the time to detection? Well, to work that out, we'll need to understand a hacker's methods.


Recon

First, the attacker must find a target. This is the first part of the recon phase. For opportunistic attackers, this could be as simple as running a specially crafted Google search (known as a Google dork... yes, really) or checking other, more malicious online search engines. Once an attacker has identified a valid and worthwhile target, they move on to the next part of recon: finding pieces of information that make it possible for them to build their attack. Much of this information will come from public sources. You'd be surprised how much information they can get from the target's own website. Once the attacker has a full understanding of the potential attack surface, the next step is to identify vulnerabilities that can be exploited as part of an initial exploitation.

Of course, there is no way to detect anything suspicious at this stage. However, it's interesting to note that some attackers have based their approach on a seemingly innocuous image found on a company's website, such as a photo of staff wearing their passes.


Initial exploitation

This is often the first time you'll have the chance to detect an attack. At this stage, the logging level of the tools you are running in front of your internet-facing systems will be critical. The tools you employ and the logs you choose to monitor will ultimately impact how and when you'll detect a breach. Of course, this will vary depending on a company's set-up or goal. For example, if it's a web-based application, then there's likely to be a web application firewall in place to pick up on malicious activity or anything beyond the norm. An attacker will want to get around these whilst creating as little noise as possible.
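To make that concrete, here is a minimal sketch of the kind of signature matching a WAF or log-monitoring tool performs over web server access logs. The log path and the handful of signatures are assumptions for illustration only; a real WAF ships with far richer rule sets:

```python
import re

# Illustrative signatures only (assumption): a few classic probe patterns
# that show up in web server access logs.
SUSPICIOUS_PATTERNS = [
    re.compile(r"union\s+select", re.IGNORECASE),  # SQL injection probe
    re.compile(r"\.\./\.\./"),                     # directory traversal
    re.compile(r"/etc/passwd"),                    # file disclosure attempt
    re.compile(r"<script", re.IGNORECASE),         # reflected XSS probe
]

def flag_suspicious_requests(log_path: str) -> list[str]:
    """Return access-log lines that match any known attack signature."""
    hits = []
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if any(p.search(line) for p in SUSPICIOUS_PATTERNS):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    for hit in flag_suspicious_requests("access.log"):  # path is an assumption
        print("ALERT:", hit)
```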

Often, close monitoring of the logs will detect probing of the network. Unknown IP addresses attempting to make contact can be flagged and investigated and, if the offending IPs are known to be malicious, they can be blocked.
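A sketch of that triage logic might look like the following, assuming a common-log-format file and a placeholder blocklist (in practice the known-bad set would come from a live threat-intelligence feed):

```python
import re
from collections import Counter

KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # placeholder blocklist (assumption)
PROBE_THRESHOLD = 50  # requests per log window before an unknown IP is flagged

# Common log format starts with the client's IPv4 address.
IP_RE = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})\s")

def triage_ips(log_path: str) -> None:
    counts: Counter[str] = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = IP_RE.match(line)
            if match:
                counts[match.group(1)] += 1
    for ip, hits in counts.most_common():
        if ip in KNOWN_BAD_IPS:
            print(f"BLOCK {ip}: known-malicious address ({hits} requests)")
        elif hits > PROBE_THRESHOLD:
            print(f"INVESTIGATE {ip}: {hits} requests in this window")

triage_ips("access.log")  # log path is an assumption
```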


Tooling up

The next step for a malicious actor would be to drop some sort of payload. This could be in the form of a PHP file, a reverse shell or practically any form of malware. Detecting this kind of activity relies on monitoring logs for any new files in areas in which they are not expected. Changes to any environment should generate logs and, if they're not expected (i.e. not covered under an official change request), these should be flagged immediately and trigger further investigation.
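As a rough illustration of that idea, the sketch below takes a snapshot of a directory where new files should never appear unannounced (a web root is assumed here) and flags anything that turns up between polls. Real file-integrity monitoring also hashes existing files to catch modifications:

```python
import os
import time

WATCHED_DIRS = ["/var/www/html"]  # assumption: a web root where new files are unexpected

def snapshot(dirs: list[str]) -> set[str]:
    """Record every file path currently present under the watched directories."""
    seen: set[str] = set()
    for top in dirs:
        for root, _subdirs, files in os.walk(top):
            seen.update(os.path.join(root, name) for name in files)
    return seen

def watch(dirs: list[str], interval_seconds: int = 60) -> None:
    baseline = snapshot(dirs)
    while True:
        time.sleep(interval_seconds)
        current = snapshot(dirs)
        for new_file in sorted(current - baseline):
            # Anything not covered by an approved change request warrants investigation.
            print(f"ALERT: unexpected new file {new_file}")
        baseline = current

if __name__ == "__main__":
    watch(WATCHED_DIRS)
```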

Payloads can also be sent in the form of email attachments. Fortunately, it is possible to block certain file types from being sent or received, though this is not without its drawbacks. Similarly, outbound comms can be an area of interest, especially in the case of a reverse shell and command-and-control (C2) traffic, where a new and unexpected outbound port can be observed. Even today, it's not uncommon to find environments with unrestricted outbound access, particularly in the cloud, which makes an attacker's job much easier. With that in mind, monitoring outbound comms is just as important as analysing what's coming into the network.
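To illustrate the outbound side, here is a sketch that lists established TCP connections via the Linux ss utility and flags any destination port outside a sanctioned set. The allowed-port list is an assumption, and production monitoring would typically use firewall or flow logs rather than polling ss:

```python
import subprocess

# Assumption: only these outbound destination ports are sanctioned.
ALLOWED_OUTBOUND_PORTS = {53, 80, 123, 443}

def unexpected_outbound() -> list[str]:
    """Flag established TCP connections whose peer port is unsanctioned.

    Parses the output of `ss -tn state established` (Linux/iproute2);
    the peer address:port is the last whitespace-separated field.
    """
    output = subprocess.run(
        ["ss", "-tn", "state", "established"],
        capture_output=True, text=True, check=True,
    ).stdout
    alerts = []
    for line in output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 4:
            continue
        peer = fields[-1]                   # e.g. 198.51.100.9:4444
        port = int(peer.rsplit(":", 1)[-1])
        if port not in ALLOWED_OUTBOUND_PORTS:
            alerts.append(f"unexpected outbound connection to {peer}")
    return alerts

if __name__ == "__main__":
    for alert in unexpected_outbound():
        print("ALERT:", alert)
```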


Lateral movement

Attackers only need a foothold on the network in order to move into more secure areas and get their hands on critical data. With the recent proliferation of Internet of Things (IoT) devices, many businesses are unwittingly adding extra doorways and windows into their network.

Because of this, being aware of normal behaviour in your environment is crucial to detecting a possible attack. Of course, this means spending a period of time with your threat monitoring service getting to know your network and defining what is normal. With new additions to a network, this baseline can be redefined to reduce the instances of false positives. This is known as thresholding and is an integral part of threat monitoring and managed SIEM.

With so many ways for an attacker to infiltrate a business, it's imperative you know what constitutes 'normal' behaviour and, more importantly, what does not. If a server in your environment starts to communicate via an excessive number of ports across a wide range of IP addresses, this should raise alarm bells, and your monitoring system should pick it up.
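As a toy example of thresholding, the sketch below derives a per-server alert threshold from an observed baseline of distinct-peer counts. The sample numbers and the three-sigma rule are assumptions; a managed SIEM would do this over far richer telemetry:

```python
from statistics import mean, stdev

# Assumed telemetry: per-interval counts of distinct peers each server
# talked to, exported from flow logs or a SIEM.
history: dict[str, list[int]] = {
    "web-01": [12, 9, 14, 11, 10, 13, 12, 9],
    "db-01": [3, 4, 3, 2, 4, 3, 3, 4],
}

def build_thresholds(history: dict[str, list[int]], sigmas: float = 3.0) -> dict[str, float]:
    """Derive a per-server alert threshold from its observed baseline."""
    return {
        host: mean(samples) + sigmas * stdev(samples)
        for host, samples in history.items()
        if len(samples) >= 2  # stdev needs at least two samples
    }

def check(current: dict[str, int], thresholds: dict[str, float]) -> None:
    for host, count in current.items():
        limit = thresholds.get(host)
        if limit is not None and count > limit:
            print(f"ALERT: {host} contacted {count} distinct peers (baseline limit {limit:.1f})")

thresholds = build_thresholds(history)
check({"web-01": 11, "db-01": 41}, thresholds)  # db-01 trips its threshold
```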

Remember, an attacker will always choose the path of least resistance. Should they find an easy target, they’ll exploit this to pivot onto other parts of your network.


The wind of change

As security systems and tactics evolve, so does the approach of the hacker. In my experience, they are more regularly opting for a stealthier approach. The longer they can go undetected, the longer they have to capitalise on their access. One rising trend in cybercrime is cryptojacking, wherein an attacker installs malware on a compromised target to siphon off CPU power to mine cryptocurrencies. This is likely to become more prevalent, as it's easier to monetise than ransomware or DDoS attacks.


Tech vs attackers

Whether attackers are more advanced or are innovating faster than security tech is an interesting discussion to have. However, what I have seen suggests this is not the case. Hackers often use old techniques and publicly available software, and will rarely attempt to breach an organisation with a robust security system, as it's not worth the time and effort required. A lot of their success comes from exploiting poorly configured devices connected to the network, or via social engineering, the latter of which is a problem that will always remain.

These attacks tend to rely on the malware being installed somehow. Often, this will involve tricking an employee into unwittingly installing it, though there are some methods that allow for background installs to be initiated. Either way, a well-managed SIEM system monitored by a skilled SOC team will be able to discover and isolate these installs before any real damage is done.


Detect or block?

In a perfect world, we'd be able to block every breach attempt. However, due to a variety of factors, that's simply not possible. Whilst we can successfully block a number of attacks, some are always going to slip through the net. This means early detection is vital in order to isolate and eliminate the threat before any real damage is done.

The solution relies on a number of key components, a lot of which are in short supply. First and foremost, you’ll need a team of talented security experts dedicated to threat monitoring. These experts will be crucial in reducing dwell time. Monitoring and responding to all the events across all attack vectors is a full-time job in many cases, meaning most companies have to make a hard choice. Do they invest large amounts of money into installing an in-house SIEM complete with trained SOC analysts? Not only is this costly but it will also demand a lot of time when it comes to developing a full understanding of the systems. Do they carry on as they are and accept that this will inevitably lead to serious dwell time? Whilst this may seemingly save money, the increased risks could be extremely damaging.


The best of both worlds

Outsourced managed SIEM is becoming increasingly popular. Providers deliver a ready-made SOC, complete with trained analysts, at a fraction of the cost of building one in-house. Thresholding, along with the logs and environments monitored, can be customised to suit any business need. Everything can be changed and tweaked on an ongoing basis until you have a package that suits you.

Outsourcing is a cheaper option and also saves time: a company doesn't have to advertise for new positions, interview candidates or provide any sort of training. Everything is already in place.

All this should be provided through a simple, easy-to-use service that explains what occurred, what was done and the final result in plain terms. For many, implementing this in-house is a struggle or a serious drain on resources (time, money and staff). If this is the case, you may benefit from outsourcing this responsibility. If you're wondering whether this option is for you, ask yourself:

  • Are you collecting the right data to best detect a breach?
  • Are you able to keep your chosen platform running and continually improve your ability to detect a range of evolving threats?
  • Do you have the right staff with the right skillsets?
  • Are you able to tune and improve the AI engines?
  • Can you support 24/7 detection?

Summing up

In short, dwell time can be significantly reduced with the right approach to threat monitoring. However, most organisations are too busy trying to manage and grow their business to focus on installing the relevant teams and equipment to combat an ever-present cyber threat. Unfortunately, it's become too big a problem to ignore.

Whatever you choose, be it an in-house model or an outsourced one, if you don't have the right people with the right skills monitoring the right events, then you are increasing dwell time and putting your business at extreme risk. Dwell time is one of the most pressing cyber issues for any business, and it's important to keep it to a minimum.


