When Red Teams Find What Compliance Audits Miss: Lessons from Financial Services

TL;DR

Compliance audits and red team engagements serve different purposes and find different problems. Financial institutions that treat compliance as sufficient security miss the attack paths adversaries actually use. This piece examines specific technical gaps between checkbox validation and offensive testing, drawn from lessons in financial services environments.


The Gap Appears After the Audit Passes

In 2019, Capital One discovered that an attacker had accessed personal information belonging to over 100 million credit card applicants [1]. The breach occurred through a misconfigured web application firewall in their AWS environment. According to the subsequent investigation, the vulnerability allowed an attacker to access data from Capital One’s cloud storage buckets through what the FBI described as a “command injection vulnerability” [1].

Here’s what makes this instructive: Capital One wasn’t lacking in compliance credentials or security certifications. They had frameworks in place, regular audits, and documented controls. Individual components were configured according to specifications. But the specific misconfiguration that enabled the breach (the relationship between the firewall, the server role permissions, and the storage access) created an attack path that traditional compliance validation didn’t catch [2].

The attacker didn’t break through properly configured controls. They exploited the spaces between them, chaining together access in a way that security documentation and compliance frameworks don’t typically test. The vulnerability existed not because a control was absent, but because the interaction between multiple systems created an exposure that only became visible when someone tested it the way an adversary would.

Compliance audits measure whether controls exist. Offensive testing measures whether those controls stop adversaries when chained together in ways organizations don’t anticipate. That difference matters.


What Compliance Audits Are Designed to Find

Compliance audits serve a specific and necessary function. They verify that documented controls exist, that baseline security hygiene is maintained, and that organizations meet regulatory requirements like GLBA, PCI-DSS, or SOC 2. Auditors review evidence that policies and procedures are written, approved, and implemented according to the framework’s specifications.

This approach has real value. It creates a minimum security baseline across the financial services industry. Organizations become accountable for documented processes. Obvious gaps in foundational controls get identified and remediated. When regulators or legal counsel need to demonstrate due diligence, compliance audits provide that defensibility.

But the methodology has inherent constraints that matter for security outcomes.

Compliance frameworks test controls in isolation, not as adversaries would chain them together. An auditor validates that network segmentation exists, that firewall rules are documented and reviewed, that access controls are in place. They examine evidence: configuration screenshots, policy documents, access review logs, change management tickets. The assessment confirms these controls are present and operating as specified.

What compliance audits don’t do is attempt to bypass those controls. They don’t test whether the segmentation actually contains lateral movement when an attacker compromises a system. They don’t verify whether the firewall rules prevent the specific attack paths that matter. They don’t measure if the access controls stop unauthorized access under adversary pressure.

The assessment happens annually or periodically, creating snapshots rather than continuous validation. Auditors work within the scope defined by the compliance framework, not the scope defined by the current threat landscape. They assess whether your controls match the requirements, not whether those requirements are sufficient against the adversaries targeting financial institutions.


What Red Teams Actually Test

Red team engagements start with a different question. Instead of asking whether controls exist, we ask whether we can accomplish an adversary’s objectives despite those controls. The methodology simulates how attackers actually operate: identifying initial access, establishing persistence, moving laterally, escalating privileges, and accessing target data.

We chain multiple small issues into critical access. A permissive firewall rule that seems minor becomes significant when combined with a service account that has excessive permissions and a monitoring gap that prevents detection. Each individual issue might be low severity in a vulnerability scan. Together, they create a path to core systems.

Red teams test controls under pressure rather than reviewing them in isolation. We validate whether defenses detect our activities and whether response procedures actually work when someone is actively evading them. We measure time to detection, quality of alerts, and effectiveness of containment. We exploit the gaps between systems that are individually compliant but collectively vulnerable.

The questions differ fundamentally from compliance validation. Compliance asks if multi-factor authentication is enabled. We ask if we can compromise the MFA implementation through session hijacking, push notification fatigue, or SIM swapping. Compliance asks if access reviews are conducted quarterly. We ask if we can gain access that a reviewer wouldn’t notice because it looks like legitimate service account activity.

We think in attack paths, not control frameworks. We find the seams between security domains: the boundary where network security meets application security, where IT controls meet business process, where technical enforcement meets human judgment. We test the assumptions that documentation takes for granted.


Five Technical Gaps We Consistently Find

These differences manifest in specific, recurring patterns we see across financial institutions.

Lateral Movement Paths in “Compliant” Environments

Network segmentation looks good on architecture diagrams. VLANs are properly configured. Firewall rules are documented and reviewed. Then we compromise a workstation and discover service accounts with domain credentials stored in scheduled tasks. Administrative jump hosts technically live in the right VLAN but have trust relationships to multiple security zones.

Monitoring tools require access everywhere to collect logs, creating paths the segmentation design never accounted for. Compliance validated that segments exist. Nobody tested whether they contain an attacker who has compromised a system inside them.
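The chained access described above can be sketched as a graph problem. The zone names and trust edges below are hypothetical, but the pattern is what we consistently find: every individual segment passes its audit, yet the trust relationships between segments form a path from an ordinary workstation to core systems.

```python
from collections import deque

# Hypothetical environment: each zone is "compliant" in isolation, but
# trust relationships (jump hosts, monitoring agents, cached service
# credentials) create edges the segmentation design never accounted for.
trust_edges = {
    "workstation_vlan": ["jump_host"],    # admin jump host reachable from workstations
    "jump_host": ["server_vlan", "dmz"],  # jump host is trusted into multiple zones
    "server_vlan": ["monitoring"],        # agents phone home to the log collector
    "monitoring": ["core_banking"],       # the collector has access everywhere
    "dmz": [],
    "core_banking": [],
}

def attack_paths(graph, start, target):
    """Breadth-first search for every acyclic path from an initial
    foothold to a high-value zone, following abusable trust edges."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:  # avoid cycles
                queue.append(path + [nxt])
    return paths

# A compromised workstation reaches core banking in a handful of hops,
# even though every individual segment would pass its audit.
print(attack_paths(trust_edges, "workstation_vlan", "core_banking"))
```

Real engagements build this graph from live enumeration rather than a declared inventory; the declared inventory is usually the thing being tested.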

API Security Beyond Authentication Presence

Auditors confirm that APIs require authentication, enforce TLS, and generate logs. All true. All compliant. All insufficient. We find authorization logic that validates authentication but not whether users should access this specific customer’s data.

Rate limiting is configured so permissively that credential stuffing attacks succeed before triggering alerts. We chain API calls in sequences developers never intended, bypassing business logic controls. The difference between “authentication exists” and “authorization is correctly enforced across all access paths” determines whether customer data stays protected.
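A minimal sketch of that authorization gap, using hypothetical handler names and a toy data model. The first handler satisfies the checklist (authentication required, call logged); the second also asks whether this user should see this record:

```python
# Illustrative only: not any specific institution's API or data model.
ACCOUNTS = {
    "acct-100": {"owner": "alice", "balance": 5200},
    "acct-200": {"owner": "bob",   "balance": 900},
}

def get_account_compliant(authenticated_user, account_id):
    """Passes the audit checklist: caller is authenticated.
    But any logged-in user can read any customer's account (an IDOR)."""
    if authenticated_user is None:
        raise PermissionError("authentication required")
    return ACCOUNTS[account_id]

def get_account_secure(authenticated_user, account_id):
    """Also enforces object-level authorization: does this user own this record?"""
    if authenticated_user is None:
        raise PermissionError("authentication required")
    record = ACCOUNTS[account_id]
    if record["owner"] != authenticated_user:
        raise PermissionError("not authorized for this account")
    return record

# Bob is authenticated, so the "compliant" handler happily returns
# Alice's data; the secure handler refuses.
print(get_account_compliant("bob", "acct-100"))
```

Both versions would produce identical evidence for an auditor: authentication enforced, access logged. Only the second stops the attack.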

Third-Party Integration Assumptions

Vendor security questionnaires are completed. Service level agreements include security requirements. Integration documentation is reviewed and approved. Meanwhile, vendor API keys from a proof-of-concept project three years ago still have production database access.

Shared credentials exist in integration documentation that multiple people across two organizations can access. VPN connections established for vendor support never get revoked when projects end. Compliance validated that vendor relationships are documented. It didn’t verify that vendor access matches current business need or follows least privilege.
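One way to close that gap is to audit vendor credentials against current business need rather than against documentation. The inventory below is hypothetical, but the check it sketches (idle age plus project status, not just existence) is the question compliance never asks:

```python
from datetime import date

# Hypothetical vendor-credential inventory. Vendor names, scopes, and
# the 90-day threshold are illustrative assumptions.
vendor_keys = [
    {"vendor": "pay-proc", "scope": "prod-db", "last_used": date(2022, 3, 1),  "project_active": False},
    {"vendor": "kyc-api",  "scope": "read",    "last_used": date(2025, 6, 20), "project_active": True},
    {"vendor": "poc-etl",  "scope": "prod-db", "last_used": date(2021, 1, 15), "project_active": False},
]

def stale_or_excessive(keys, today, max_idle_days=90):
    """Flag credentials that are idle past the threshold or tied to
    projects that have ended, regardless of how well documented they are."""
    flagged = []
    for k in keys:
        idle = (today - k["last_used"]).days
        if idle > max_idle_days or not k["project_active"]:
            flagged.append((k["vendor"], k["scope"], idle))
    return flagged

for vendor, scope, idle in stale_or_excessive(vendor_keys, date(2025, 7, 1)):
    print(f"revoke-review: {vendor} ({scope}), idle {idle} days")
```

The proof-of-concept key with production database access is exactly the kind of entry this surfaces and a vendor questionnaire never will.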

Privilege Escalation Through Business Process

We find application workflows that grant temporary elevated access to process certain transactions. Approval workflows can be manipulated by submitting requests at specific times or through specific channels. Emergency access procedures exist for legitimate operational needs but lack real-time monitoring.

Service desk processes allow password resets with verification that sounds secure in procedure documents but fails under social engineering pressure. Business process “exceptions” become attack vectors because they’re documented as legitimate workflow rather than tested as potential exploitation paths. Access reviews confirm proper permissions, but they don’t catch what the business process allows.

Detection and Response Effectiveness

We move laterally for days before generating an alert. When alerts fire, they don’t correlate into actionable signals that indicate an intrusion in progress. Response procedures that read well in documents break down under the pressure and uncertainty of an active incident. Defenders have logs but lack the context to distinguish our malicious activity from legitimate administrative actions.

Having monitoring tools and having effective threat detection are not the same thing.
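The correlation gap can be made concrete with a sketch. The events, event types, and thresholds below are hypothetical; the point is that several individually ignorable signals from one host inside a short window should add up to an alert, and in many environments they don't:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical low-severity events, each ignorable on its own.
events = [
    {"host": "wks-17", "time": datetime(2025, 7, 1, 2, 10), "type": "new_service_install"},
    {"host": "wks-17", "time": datetime(2025, 7, 1, 2, 14), "type": "smb_logon_burst"},
    {"host": "wks-17", "time": datetime(2025, 7, 1, 2, 21), "type": "lsass_read"},
    {"host": "db-03",  "time": datetime(2025, 7, 1, 9, 0),  "type": "failed_logon"},
]

def correlate(events, window=timedelta(minutes=30), threshold=3):
    """Flag any host that produces `threshold` or more distinct event
    types inside a sliding time window — a weak-signal aggregate that
    no single alert rule would fire on."""
    by_host = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        by_host[e["host"]].append(e)
    alerts = []
    for host, evts in by_host.items():
        for i, first in enumerate(evts):
            types = {e["type"] for e in evts[i:] if e["time"] - first["time"] <= window}
            if len(types) >= threshold:
                alerts.append((host, first["time"], sorted(types)))
                break
    return alerts

print(correlate(events))
```

Production detection pipelines do this with far richer context, but the principle is the same: the value lives in the correlation logic, not in the volume of logs collected.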


Practical Implications for Financial Institutions

Financial institutions need both compliance audits and offensive testing. They serve different purposes and neither substitutes for the other.

Compliance provides the foundation. It establishes baseline controls, creates accountability for documented processes, and delivers the regulatory defensibility that financial institutions require. Red team engagements validate that the foundation holds under adversary pressure. They reveal whether your defenses actually work the way your documentation says they should.

Resource allocation shouldn’t treat these as competing priorities. Don’t defund compliance programs to do offensive testing. Don’t assume compliance alone equals security. Both matter, but for different reasons.

Leadership needs clear framing: Compliance is required for regulatory and legal positioning. Offensive testing is required to know if your defenses actually work against the adversaries targeting financial institutions. One satisfies auditors. The other reveals what attackers will find.

Frequency considerations differ too. Annual compliance assessments make sense for baseline validation and regulatory cycles. Continuous or quarterly offensive testing matters for critical paths and high-value assets. Focused red team exercises after significant infrastructure changes, major application deployments, or cloud migrations catch issues before adversaries do.

Organizations that understand this distinction make better security decisions. They budget for both. They use offensive findings to improve security posture and compliance documentation simultaneously.


Testing Assumptions vs. Documenting Controls

Capital One had compliance frameworks and documented controls. What they lacked was validation that the specific configuration of their cloud infrastructure would resist the attack path an adversary eventually exploited. Compliance and offensive testing both have value; understanding what each reveals makes the difference.

Financial institutions face adversaries who don’t check whether you’re compliant before they attack. They test your assumptions every day: that your segmentation contains them, that your detection actually detects, that the spaces between your properly configured controls aren’t exploitable.

When was the last time your defenses were tested the way adversaries would test them?

Organizations that understand this distinction allocate resources differently. They maintain compliance programs and conduct offensive testing. They use red team findings to improve both security posture and compliance documentation. They recognize that passing an audit and stopping an adversary require different types of validation.

Start with one high-value path: your most critical system and the access path that would cause the most damage if compromised. Ask someone to attempt what an adversary would attempt, not to validate that controls exist. Document what they find in the spaces between your compliant systems. Use those findings to close gaps that compliance frameworks don’t measure.

Your compliance audit tells you whether controls exist. Offensive testing tells you whether they work.


References

[1] Federal Bureau of Investigation. (2019). Paige A. Thompson Criminal Complaint (Case No. MJ19-0350). United States District Court, Western District of Washington.

[2] Krebs, B. (2019). “What We Can Learn from the Capital One Hack.” Krebs on Security. https://krebsonsecurity.com/2019/08/what-we-can-learn-from-the-capital-one-hack/


Ready to Strengthen Your Defenses?

Whether you need to test your security posture, respond to an active incident, or prepare your team for the worst: we’re ready to help.

📍 Based in Atlanta | Serving Nationwide
