Post-Incident Threat Hunting: Finding What Automated Tools Missed

TL;DR

After a security incident, automated tools tell you what happened. Threat hunting tells you what else happened. Most organizations stop investigating once their EDR and SIEM systems identify the initial compromise. Attackers count on this.

Organizations that stop investigating once automated tools identify the initial breach leave attackers’ backup access in place. Drawing from offensive cyber operations experience, this post examines the critical techniques and indicators that automated detection misses during post-incident response.


The Investigation That Stopped Too Soon

In mid-2020, investigators at the U.S. Department of Justice detected anomalous activity in their systems that didn’t match normal patterns. Following standard procedure, they contacted SolarWinds, the vendor behind the Orion IT monitoring platform they used, assuming there was an exploitable vulnerability in the software.1

SolarWinds investigated and found no vulnerability. The exchanges between the two organizations from May through July 2020 ended with the DOJ deeming the case insignificant.1 The department was so confident in this conclusion that in August 2020 it purchased additional Orion licenses, suggesting it was “satisfied” that the Orion software posed no further threat.1

Four months later, in December 2020, cybersecurity firm FireEye discovered they’d been breached while investigating the theft of their Red Team security tools.2 During their investigation into their own compromise, they uncovered the truth: SolarWinds’ build system had been compromised by sophisticated nation-state attackers. The malicious code had been distributed to roughly 18,000 customers through legitimate software updates.3

Subsequent threat hunting by ReversingLabs revealed that attackers had actually been inside SolarWinds systems since October 2019, more than a year before discovery.4 Reporting by journalist Kim Zetter showed that firms including Volexity, Mandiant, and Microsoft, as well as the DOJ itself, had come across evidence of the compromise as much as six months earlier but did not recognize its significance.1

The DOJ’s “insignificant” detections from mid-2020 weren’t false positives. They were breadcrumbs left by one of the most sophisticated supply chain attacks in history. The automated tools did their job by generating alerts. The human investigation stopped too soon. Only rigorous post-incident threat hunting revealed what actually happened.


What Automated Tools Actually Detect

Understanding why the DOJ missed these indicators requires examining what automated tools can and cannot see.

Automated security tools excel at what they’re designed for. EDR platforms catch known malware signatures and flag behavioral patterns that match established threat models. SIEM systems correlate logged events against known attack sequences. Network monitoring identifies C2 communications that follow recognizable protocols or connect to flagged infrastructure. These capabilities are essential components of any defense strategy.

But they share a fundamental limitation: automated detection requires either signatures of known threats or patterns that deviate significantly from normal activity. Sophisticated attackers understand this and deliberately avoid both.

Living-off-the-land techniques use legitimate system tools for malicious purposes. An attacker using PowerShell remoting between domain-joined systems looks identical to a system administrator performing routine maintenance. Scheduled tasks created through standard Windows administrative interfaces generate the same logs whether they’re launching backup scripts or persistence mechanisms. Low-and-slow data exfiltration that stays within normal volume thresholds won’t trigger anomaly-based detection, even though the data is leaving your environment.

Lateral movement using valid credentials and approved protocols creates log entries, but those entries don’t automatically indicate compromise. An attacker who harvested domain admin credentials can authenticate to file servers, database systems, and cloud resources using the exact same methods as the legitimate account holder. The logs exist. The automated tools process them. But without the context of broader attack patterns, they generate no alerts.
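As an illustration, here is a minimal sketch of the kind of baseline comparison a hunter might script against exported logon records. The account names, hostnames, and the `(user, host)` tuple format are all hypothetical; a real hunt would parse Windows Event ID 4624 logs or an equivalent export, but the principle is the same: valid-credential logons only stand out against the account’s own history.

```python
from collections import defaultdict

def build_baseline(events):
    """Map each account to the set of hosts it historically logged on to."""
    baseline = defaultdict(set)
    for user, host in events:
        baseline[user].add(host)
    return baseline

def flag_new_destinations(baseline, events):
    """Return (user, host) logons to hosts absent from the account's baseline.

    These are not alerts by themselves -- valid credentials make each
    logon look legitimate -- but they are leads a hunter can triage.
    """
    return [(u, h) for u, h in events if h not in baseline.get(u, set())]

# Hypothetical exported logon records: (account, destination host)
history = [("jdoe", "WS-041"), ("jdoe", "MAIL-01"), ("svc_backup", "FS-02")]
recent = [("jdoe", "MAIL-01"), ("jdoe", "SQL-PROD-01")]  # one new destination

baseline = build_baseline(history)
leads = flag_new_destinations(baseline, recent)
# "jdoe" touching SQL-PROD-01 for the first time is worth a closer look
```

The point is not the code but the shift in question: instead of “did this logon trip a rule?”, the hunter asks “has this account ever gone there before?”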

This is a lesson drawn directly from offensive operations: when conducting red team engagements, success means avoiding automated detection while still accomplishing objectives. The techniques that defeat automated prevention also defeat automated investigation. Every capability that allows a red team to operate undetected is available to actual adversaries.


The Threat Hunting Mindset

Recognizing these limitations changes how we investigate incidents.

Post-incident threat hunting operates from a fundamentally different premise than automated investigation. Automated tools answer the question “What triggered our alerts?” Threat hunting answers “What would I do if I were the attacker?”

This distinction matters because it changes the entire investigative approach. When automated tools identify an incident, the natural response is to scope the compromise based on what those tools can see. Threat hunting assumes the automated tools missed something and works backward from attacker objectives rather than forward from logged events.

Several core assumptions drive effective post-incident hunting. First, the initial compromise identified by automated tools likely wasn’t the first attempt. Attackers rarely succeed immediately. They probe, test, and refine their approach. Those preparatory activities often stay below detection thresholds but leave artifacts in your environment.

Second, adversaries establish persistence beyond the discovered method. Finding one backdoor doesn’t mean you’ve found the only backdoor.

Third, lateral movement probably extended further than what generated alerts. Attackers map networks, harvest credentials, and move between systems using techniques specifically designed to blend with normal administrative activity.

Fourth, data collection occurred that didn’t trigger volume-based alerts. Exfiltration doesn’t require massive data transfers if the attacker has time and patience.

Applying the offensive operator perspective to defense means thinking through the practical realities of how attacks work. Attackers prepare escape routes and backup access points before they need them. Persistence mechanisms layer across multiple systems because single points of failure are unacceptable. Credential harvesting happens early and broadly to enable future movement options. Reconnaissance activities deliberately mimic normal traffic patterns to avoid standing out in logs.

Threat hunting therefore follows a hypothesis-driven approach. Start with attacker objectives: What were they actually after? A nation-state pursuing intellectual property behaves differently than ransomware operators seeking maximum leverage.

Map probable attack paths based on those objectives. An attacker targeting your engineering documentation will move toward file servers and code repositories. An attacker targeting financial systems will pursue database access and financial application credentials. Investigate specific artifacts along those probable paths. Don’t limit investigation to systems that generated alerts.

The time horizon also expands dramatically. Automated tools focus on the incident timeline. Threat hunting extends weeks or months backward, looking for preparatory activities that weren’t anomalous at the time but become significant when viewed through the lens of a confirmed compromise.


Specific Artifacts Worth Manual Investigation

Knowing what to hunt for is only half the challenge. Certain categories of evidence require human analysis precisely because they don’t trigger automated alerts. These artifacts exist in your logs and on your systems, but their significance only becomes clear when examined with knowledge of attacker tradecraft.

Authentication Anomalies

Authentication patterns requiring human analysis include valid credential use across unexpected system combinations. An account that normally authenticates to workstations and email suddenly authenticating to database servers might indicate credential compromise. But if the volume and timing fall within normal parameters, automated systems won’t flag it.

Administrative actions during non-standard hours that didn’t exceed threshold-based detection rules deserve scrutiny. A system administrator working at 2 AM once generates no alerts. The same administrator account accessing sensitive systems at 2 AM every night for three weeks tells a different story.

Service account access patterns that changed subtly over time can indicate an attacker establishing operational patterns. VPN sessions from expected geographic regions but with unexpected device fingerprints or browser characteristics suggest compromised credentials rather than compromised endpoints.
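The repeated-2-AM pattern above lends itself to a simple aggregation that a hunter could run over exported authentication logs. This is a sketch under assumed inputs: the account names and the `(account, ISO-8601 timestamp)` record shape are hypothetical, and the off-hours window and night threshold are tuning parameters, not fixed values.

```python
from datetime import datetime

def recurring_offhours(events, start_hour=0, end_hour=5, min_nights=5):
    """Flag accounts with off-hours activity on many distinct nights.

    A single 2 AM logon is routine; the same account at 2 AM night
    after night is an operational pattern worth hunting.
    events: iterable of (account, ISO-8601 timestamp) tuples.
    """
    nights = {}
    for account, ts in events:
        t = datetime.fromisoformat(ts)
        if start_hour <= t.hour < end_hour:
            nights.setdefault(account, set()).add(t.date())
    return {a for a, d in nights.items() if len(d) >= min_nights}

# Hypothetical records: admin1 shows up at 02:15 on six separate nights
events = [("admin1", f"2024-03-{d:02d}T02:15:00") for d in range(1, 7)]
events += [("admin2", "2024-03-03T02:30:00"),   # one-off, not a pattern
           ("admin1", "2024-03-10T14:00:00")]   # daytime, ignored
flagged = recurring_offhours(events)
```

Each individual event here is below any sensible alert threshold; only the aggregation across weeks reveals the pattern.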

Hidden Persistence

Persistence mechanisms often hide in plain sight because they use legitimate system functionality. Registry keys created by legitimate system processes can contain malicious payloads if an attacker has access to modify them. Scheduled tasks that execute once monthly or quarterly stay well below the frequency thresholds that trigger automated detection.

WMI subscriptions using standard system objects for event triggers look identical to legitimate management automation. DLL search order hijacking in rarely-executed applications won’t surface until that application runs. Even then, the behavior may appear normal if the hijacked DLL performs its expected functions alongside malicious ones.
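One way to surface the monthly-or-quarterly tasks described above is to invert the usual logic: instead of flagging tasks that run often, flag tasks whose execution history is suspiciously sparse. The sketch below assumes a pre-exported run log (task name mapped to run timestamps); the task names and the 25-day gap threshold are illustrative, not prescriptive.

```python
from datetime import datetime

def sparse_tasks(run_log, min_gap_days=25):
    """Flag tasks whose executions are spaced far apart.

    Monthly or quarterly persistence tasks stay below frequency-based
    detection; a sparse execution history is itself a hunting lead.
    run_log: mapping of task name -> list of ISO-8601 run timestamps.
    """
    flagged = []
    for task, runs in run_log.items():
        times = sorted(datetime.fromisoformat(t) for t in runs)
        if len(times) < 2:
            continue  # too little history to judge cadence
        gaps = [(b - a).days for a, b in zip(times, times[1:])]
        if min(gaps) >= min_gap_days:
            flagged.append(task)
    return flagged

# Hypothetical run history: one monthly task, one nightly task
run_log = {
    "CleanupCache":  ["2024-01-05T03:00:00", "2024-02-04T03:00:00",
                      "2024-03-05T03:00:00"],
    "NightlyBackup": ["2024-03-01T01:00:00", "2024-03-02T01:00:00",
                      "2024-03-03T01:00:00"],
}
flagged = sparse_tasks(run_log)
```

Every task this flags still needs human triage; many legitimate maintenance jobs are monthly. The value is producing a short, reviewable list instead of none at all.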

Stealthy Data Movement

Data staging operations that evade detection require understanding how attackers actually move data. Compression operations split across multiple days, each well below suspicious volume thresholds, can prepare gigabytes for exfiltration. File movements to intermediate locations using standard administrative tools like robocopy or PowerShell file cmdlets generate expected logs.

Cloud sync clients configured with new destinations appear as legitimate software making legitimate connections. Email rules that forward copies to seemingly legitimate addresses leave minimal forensic evidence unless you’re specifically hunting for unauthorized mail forwarding rules.
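The low-and-slow staging described above can be hunted with a sliding-window sum: per-day volumes stay under the alerting threshold, but the cumulative total over a week does not. The account names, thresholds, and the per-day megabyte lists are all hypothetical inputs for the sketch.

```python
def cumulative_offenders(daily_mb, per_day_threshold=500,
                         window=7, window_threshold=1500):
    """Flag accounts whose outbound volume stays under the daily
    threshold but accumulates suspiciously over a sliding window.

    daily_mb: mapping of account -> list of daily outbound MB totals.
    """
    flagged = set()
    for account, days in daily_mb.items():
        if any(v > per_day_threshold for v in days):
            continue  # would already trip volume-based alerting
        for i in range(len(days) - window + 1):
            if sum(days[i:i + window]) > window_threshold:
                flagged.add(account)
                break
    return flagged

# Hypothetical weekly volumes: routine user vs. steady staging
daily_mb = {
    "jdoe":     [40, 55, 60, 35, 50, 45, 40],
    "svc_sync": [300, 290, 310, 305, 295, 300, 310],
}
flagged = cumulative_offenders(daily_mb)
```

No single day from either account would alert; only the week-level view separates routine work from steady exfiltration.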

Legitimate-Looking Lateral Movement

Lateral movement via approved channels represents one of the most challenging detection problems. RDP sessions between systems with valid business justification generate the same logs whether initiated by administrators or attackers using compromised credentials. File transfers using standard corporate tools like SharePoint, OneDrive, or internal file shares don’t trigger alerts based on the transfer mechanism alone.

Script execution via legitimate management frameworks like SCCM, Ansible, or PowerShell remoting creates expected telemetry. PowerShell remoting within administrative boundaries looks identical to legitimate system administration.
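A useful signal for this class of movement is fan-out: a single source reaching an unusual number of distinct destinations, even when every individual session is valid. The sketch below assumes session records already exported as `(source_host, dest_host)` pairs; hostnames and the destination threshold are hypothetical.

```python
from collections import defaultdict

def fanout_sources(sessions, max_destinations=5):
    """Flag source hosts whose remote sessions reach unusually many
    distinct destinations -- a fan-out pattern typical of lateral
    movement, even when each session looks legitimate on its own.

    sessions: iterable of (source_host, dest_host) tuples.
    """
    dests = defaultdict(set)
    for src, dst in sessions:
        dests[src].add(dst)
    return {s for s, d in dests.items() if len(d) > max_destinations}

# Hypothetical session records: one workstation touching eight servers
sessions = [("WS-007", f"SRV-{i:02d}") for i in range(8)]
sessions += [("ADMIN-01", "FS-01"), ("ADMIN-01", "FS-02")]
suspects = fanout_sources(sessions)
```

Jump boxes and management servers will fan out legitimately, so the baseline threshold has to be set per environment; the technique finds leads, not verdicts.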

Network Traffic in Plain Sight

Network communications hiding in normal traffic require context to identify. HTTPS connections to recently-registered domains that haven’t yet been categorized as malicious blend with thousands of other HTTPS connections. DNS queries for legitimate services that also resolve to command infrastructure won’t stand out without understanding the full attack chain.

Cloud storage uploads during business hours at normal volumes could represent legitimate work or steady data exfiltration. Protocol tunneling through approved proxy connections appears as expected network behavior.
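Domain age is one piece of context a hunter can join against connection logs: recently registered domains haven’t yet been categorized, which is exactly why attackers favor them. The sketch assumes registration dates are already available (for example, from bulk WHOIS lookups); the domain names and the 30-day cutoff are illustrative.

```python
from datetime import date

def young_domains(connections, registrations, max_age_days=30):
    """Flag connections to domains registered shortly before first contact.

    connections: iterable of (domain, ISO date the connection was seen).
    registrations: mapping of domain -> ISO registration date.
    """
    flagged = []
    for domain, seen in connections:
        reg = registrations.get(domain)
        if reg is None:
            continue  # unknown registration date: route to separate triage
        age = (date.fromisoformat(seen) - date.fromisoformat(reg)).days
        if age <= max_age_days:
            flagged.append(domain)
    return flagged

# Hypothetical lookup data: a long-lived domain vs. a 10-day-old one
registrations = {"updates.example.com": "2019-04-12",
                 "cdn-sync.example.net": "2024-02-20"}
connections = [("updates.example.com", "2024-03-01"),
               ("cdn-sync.example.net", "2024-03-01")]
flagged = young_domains(connections, registrations)
```

The connection itself is ordinary HTTPS traffic; only the joined registration data makes it stand out.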

None of these activities individually trigger alerts. Combined and viewed through the lens of attacker operations, they represent deliberate tradecraft. Context and timing relationships reveal malicious intent that rules-based detection cannot define. Human analysts trained to think like attackers recognize operational patterns that automated correlation engines miss.


Building Effective Post-Incident Hunting Programs

Sustainable hunting programs depend on deliberate choices about resources, documentation, and integration with existing response workflows.

Resource allocation for post-incident hunting requires strategic prioritization. Organizations cannot hunt exhaustively after every security event. The decision to conduct deep threat hunting should be based on adversary sophistication and the scope of access they achieved. Targeted attacks by skilled adversaries require far more intensive hunting than opportunistic malware infections.

Small incidents with clearly defined objectives and limited access may not warrant extensive hunting efforts beyond initial response activities.

Documentation and knowledge transfer transform individual hunts into organizational capability. Recording what you looked for matters as much as recording what you found. Negative findings have value because they confirm hypotheses were wrong and prevent duplicate effort in future investigations.

Building organizational memory about attacker techniques, even when those techniques didn’t succeed in a particular environment, helps detection engineering teams understand gaps in coverage. Findings should feed directly back into detection rule development, creating a continuous improvement cycle.

Integration with incident response workflows determines when hunting occurs and how findings are used. Threat hunting isn’t optional for incidents involving targeted attacks, particularly those with evidence of sophisticated tradecraft or extended dwell time. The decision to initiate hunting should be made during initial incident scoping based on indicators of adversary capability and the potential business impact.

Balance immediate containment needs with thorough investigation. Rushing to containment without understanding full attacker presence risks leaving persistent access behind.

Skills and training directly impact hunting effectiveness. The most effective threat hunters understand attacker operational security because they’ve practiced it themselves. Red team experience translates directly to hunting capability because it builds intuition for how skilled adversaries operate and what traces they leave.

Offensive cyber training develops the adversarial mindset required to hypothesize about attacker behavior. Organizations should invest in building these skills internally or partnering with teams that already possess them.


The Cost of Stopping Too Soon

The Department of Justice had the evidence they needed in mid-2020. Their automated tools generated alerts. Their investigators examined the data. They reached a conclusion and moved on.

That conclusion was wrong. The cost of stopping the investigation too soon extended far beyond the DOJ itself.

Post-incident threat hunting operates on two critical assumptions. First, it assumes your automated tools worked exactly as designed. They collected logs, generated alerts, and flagged anomalies according to their programming.

Second, it assumes the attacker is competent enough to work around those tools. These aren’t pessimistic assumptions; they’re realistic ones that protect you from adversaries who actually matter.

The question isn’t whether your automated detection failed. The question is what else happened that your automated detection couldn’t see. The time to answer that question is immediately after an incident, while evidence is fresh and response teams are mobilized.

Waiting until the attacker returns with the access they quietly maintained isn’t a strategy. It’s hope disguised as security.


References

  1. “The Week in Security: Sunburst attack set off alarms for months before discovery,” ReversingLabs Blog – https://www.reversinglabs.com/blog/the-week-in-security-solarwinds-orion-hack-set-off-alarms-for-months-before-discovery
  2. “The SolarWinds hack timeline: Who knew what, and when?” CSO Online, June 2021 – https://www.csoonline.com/article/570537/the-solarwinds-hack-timeline-who-knew-what-and-when.html
  3. “SolarWinds hack explained: Everything you need to know,” TechTarget – https://www.techtarget.com/whatis/feature/SolarWinds-hack-explained-Everything-you-need-to-know
  4. “SolarWinds Likely Hacked at Least One Year Before Breach Discovery,” SecurityWeek, January 2023 – https://www.securityweek.com/solarwinds-likely-hacked-least-one-year-breach-discovery/


Ready to Strengthen Your Defenses?

Whether you need to test your security posture, respond to an active incident, or prepare your team for the worst, we’re ready to help.

📍 Based in Atlanta | Serving Nationwide
