Author Q&A: Here’s why the good guys must continually test the limitations of ‘EDR’

By Byron V. Acohido

A new tier of overlapping, interoperable, highly automated security platforms must, over the next decade, replace the legacy, on-premises systems that enterprises spent multiple kings’ fortunes building up over the past 25 years.

Related: How ‘XDR’ defeats silos

Now along comes a new book, Evading EDR: The Definitive Guide for Defeating Endpoint Detection Systems, by red team expert Matt Hand, which drills down into a premier legacy security system in the midst of this transition: endpoint detection and response, or EDR.

Emerging from traditional antivirus and endpoint protection platforms, EDR rose to the fore in the mid-2010s to improve upon the continuous monitoring of servers, desktops, laptops and mobile devices and put security teams in a better position to mitigate advanced threats, such as APTs and zero-day vulnerabilities.

Today, EDR is relied upon to detect and respond to phishing, account takeovers, BEC attacks, business logic hacks, ransomware campaigns and DDoS bombardments across an organization’s environment. It’s a key tool security teams rely upon to read the tea leaves and carry out triage – that is, to make sense of the oceans of telemetry ingested by SIEMs – and thus get to a position where they can more wisely fine-tune their organization’s automated vs. manual responses.

Last Watchdog visited with Hand to get his perspective on what it’s like in the trenches, deep inside the world of managing EDRs, on the front lines of non-stop cyber attacks and reactive defensive tactics. He says he wrote Evading EDR to help experienced and up-and-coming security analysts grasp every nuance of how EDR systems work, from a vendor-agnostic perspective, and thus get the most from them. His guidance also happens to shed some revealing light on the ground floor of the cyber arms race while illustrating why network security needs to be overhauled.

LW: From a macro level, do security teams truly understand their EDRs? How much are they getting out of them at this moment; how much potential would you say is actually being tapped vs. left on the table?

Hand:   I don’t think the majority of teams who rely on EDRs truly understand their inner workings or are getting the most out of them. EDRs have historically been considered a “black box” – something that activity goes into and alerts come out of. Most teams I’ve encountered trust that their EDR works perfectly out of the box, and unfortunately that’s just not the case.

Every EDR needs to be tuned to the specific environment in which it is deployed. Some vendors have a period during customer onboarding wherein the EDR observes what is typical in the environment and creates a baseline, but this shouldn’t be the end of tuning. The next step should be building custom detections tailored to the organization. Unfortunately, most SOCs are still understaffed so detection engineering often goes on the back burner in favor of managing the alert queue.
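The onboarding baseline Hand describes can be thought of, in grossly simplified terms, as recording what "normal" looks like and then flagging telemetry that falls outside it. The sketch below is a minimal, hypothetical illustration of that idea – the event fields, host names and process names are invented for the example and do not reflect any particular vendor's schema:

```python
from collections import Counter

def build_baseline(events):
    """Count (host, process) pairs observed during the onboarding window."""
    return Counter((e["host"], e["process"]) for e in events)

def flag_anomalies(baseline, events, min_seen=1):
    """Flag events whose (host, process) pair was never (or rarely) baselined."""
    return [e for e in events
            if baseline[(e["host"], e["process"])] < min_seen]

# Observation window: what "typical" looks like in this environment.
observed = [
    {"host": "ws01", "process": "excel.exe"},
    {"host": "ws01", "process": "chrome.exe"},
    {"host": "srv01", "process": "sqlservr.exe"},
]
baseline = build_baseline(observed)

# Later telemetry: powershell.exe on srv01 was never seen during baselining.
new_events = [
    {"host": "srv01", "process": "sqlservr.exe"},
    {"host": "srv01", "process": "powershell.exe"},
]
alerts = flag_anomalies(baseline, new_events)
```

Custom detections tailored to the organization are essentially this step carried further: encoding what should never happen on *your* hosts, rather than relying on the vendor's generic rules.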

LW: Your chapter teasers suggest a ton of viable attack paths remain in the nooks and crannies of Windows systems; is this where attackers are making hay with Living off the Land (LotL) tactics? Can you please frame what this looks like?

Hand:   In any significantly complex system, there will inevitably be edge and corner cases that we just can’t account for. Windows is a very complex operating system and there are a ton of native capabilities that attackers can leverage. This can include using traditional living-off-the-land binaries or something as niche as a Win32 API function that allows for arbitrary code to be executed.
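A robust detection of the kind Hand argues for later in this answer keys on behavior shared across many LotL tradecraft variants rather than on any one binary's signature. A classic example of such a common-denominator signal is an Office application spawning a living-off-the-land binary. The following sketch is illustrative only – the event fields and the (deliberately short) binary lists are assumptions, not a production rule set:

```python
# Hypothetical process-creation events; field names are illustrative.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
LOLBINS = {"rundll32.exe", "regsvr32.exe", "mshta.exe", "certutil.exe"}

def is_suspicious(event):
    """Flag an Office app spawning a LotL binary – a behavior common to
    many phishing payload chains regardless of the specific payload."""
    return (event["parent"].lower() in OFFICE_PARENTS
            and event["child"].lower() in LOLBINS)

events = [
    {"parent": "explorer.exe", "child": "chrome.exe"},
    {"parent": "WINWORD.EXE", "child": "rundll32.exe"},
]
hits = [e for e in events if is_suspicious(e)]
```

Because the rule describes a relationship between processes rather than a specific tool, swapping rundll32 for mshta or certutil doesn't evade it – which is the point of moving beyond brittle signatures.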

Finding and closing all of these attack vectors is an immense, if not entirely unfeasible, task. This fact highlights the importance of growing beyond solely using brittle, signature-based detections and investing in robust detections that capture the common denominator between the many techniques and operations an attacker can employ. This is only a band-aid, though, and we should be looking to Microsoft and other OS developers to invest more in secure-by-design principles.

LW: Your book is targeted at a precious commodity: experienced cybersecurity professionals. Aren’t reactive systems that require specialized human expertise, like EDR, on their way out?

Hand:   I don’t believe so. I think the biggest problem is in reactivity and how it forces us to use our more experienced engineers. Let’s say that there is some cool new post-exploitation technique circulating. Should I pull my most experienced engineers away from building proactive defenses to test, validate, and remediate any issues or should I rely more on my vendor(s) to ensure we’re covered? If a vendor can identify and shore up a deficiency in their product, it would benefit all customers and not just those with the technical expertise to throw at the problem.

Looking beyond this, if we accept the fact that we have a staffing shortage and truly senior engineers are rare, we have two options – forge more engineers or use ours more effectively. Right now, the impact an engineer has is typically limited to their own organization. For instance, if an engineer writes a detection to catch that cool new post-exploitation technique, the outside world will likely never know.

What if, instead of keeping the output of the hard work that goes into extending the usefulness of an EDR (research, writing detections, tuning, etc.) to ourselves, we shared that information openly with others in the industry so that everyone can benefit from it? If a surgeon finds a cool new method to perform an operation that has better patient outcomes, do they squirrel it away at their hospital or do they publish it to a journal and teach others?

LW: Where do you see EDR fitting in 10 years from now? Does it have a place in the leading-edge security platforms and frameworks that are shifting more to a focus on proactive resiliency at the cloud edge, instead of reactive systems on endpoints?

Hand:   Yes, 100%. At the end of the day, an endpoint is any system that runs code, whether those be workstations, servers, mobile devices, cloud systems, ICS, or any other type of system. The nature of endpoints has and will continue to change, but there will always be endpoints that need defending. Perimeter defense has also been around for ages, but now the nature of the perimeter is changing.

Trying to decide which is more important isn’t the conversation we should be having. Rather, we should accept that proactive hardening and increasing the resiliency of Internet-facing systems, which would fall into a “prevention” category, is equally as important as ensuring that we can catch an adversary that slips through the cracks. Realistically, if a motivated and well-resourced attacker wants to get into your environment, they will.

It’s just a matter of time. If we accept that fact, we should spend our limited time and resources making it reasonably difficult to breach the perimeter (MFA, asset management, inbound mail filtering, training) while also preparing for the inevitability of a breach by implementing robust detective controls. Catching an adversary as early in their attack chain as possible reduces the impact of the breach and allows responders to evict them more confidently.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
