This week we’re introducing a new blog series: Satine Sentinel, our weekly analysis of cyber incidents that matter.
Every week, we’ll analyze 3-5 significant attacks, breaking down what happened, how the attack worked, and why defenders should care. You’ll get technical details from an offensive operator’s perspective, not vendor marketing or surface-level reporting. For particularly significant incidents, we may even do a separate deep dive.
This isn’t another breach roundup. We focus on incidents demonstrating adversary TTP evolution, supply chain risk, or novel attack patterns worth understanding. Each incident includes attack methodology, defensive implications, and links to vendor analysis and technical writeups.
This week: three supply chain attacks broke trust in developer tooling, SaaS integrations, and emergency response systems, showing how cascading failures turn third-party dependencies into single points of catastrophic failure.
Shai-Hulud 2.0: Self-Replicating npm Worm Returns
What happened:
On November 24, 2025, security researchers identified a second wave of the Shai-Hulud npm supply chain attack, dubbed “The Second Coming” by the attackers. The worm compromised 796+ npm packages totaling over 20 million weekly downloads, including packages from Zapier, ENS Domains, PostHog, and Postman. The attack executed during the preinstall phase, dramatically widening impact across developer machines and CI/CD pipelines.
Over 500 GitHub users had credentials exfiltrated to public repositories marked “Sha1-Hulud: The Second Coming,” with 25,000+ malicious repos created. The attack peaked early morning November 24 UTC, creating ~1,000 new repos every 30 minutes.
Technical details that matter:
- Execution timing shift: Moved to preinstall lifecycle scripts, executing before installation completes or even when installation fails, expanding the attack surface beyond the September variant
- New payload delivery: Disguised as Bun installer (setup_bun.js, bun_environment.js) to evade Node.js-focused detection
- Multi-stage credential harvesting: TruffleHog-based automated scanning for npm tokens, GitHub PATs, AWS/GCP/Azure credentials, SSH keys, and cloud provider secrets from disk and memory
- Destructive failsafe: If unable to replicate or exfiltrate, worm attempts to securely overwrite and delete the victim’s entire home directory – shifting from pure theft to punitive sabotage
- GitHub-based C2: Establishes persistence via self-hosted GitHub Actions runners, enabling remote code execution through GitHub’s native Discussion feature (workflow triggers on discussion creation)
- Self-propagation mechanism: Uses stolen npm credentials to automatically backdoor up to 100 packages per compromised maintainer account
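For defenders, the first mitigation is blunt: disable lifecycle scripts in CI (`npm config set ignore-scripts true`) and audit what’s already installed. Below is a minimal, illustrative Python sketch of that audit step; the payload filenames come from the reporting above, but the scanner and its heuristics are our own simplification, not a vetted detection tool:

```python
import json
from pathlib import Path

# Lifecycle scripts that npm runs automatically during `npm install`.
RISKY_HOOKS = {"preinstall", "install", "postinstall"}
# Payload filenames reported for Shai-Hulud 2.0 (per Unit 42 / Datadog).
KNOWN_BAD_FILES = {"setup_bun.js", "bun_environment.js"}

def audit_node_modules(root: str) -> list[dict]:
    """Flag installed packages that declare install-time lifecycle
    scripts or ship files matching known Shai-Hulud payload names."""
    findings = []
    for pkg_json in Path(root).rglob("node_modules/*/package.json"):
        try:
            manifest = json.loads(pkg_json.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue
        scripts = manifest.get("scripts", {}) or {}
        hooks = sorted(RISKY_HOOKS & scripts.keys())
        bad_files = sorted(
            f.name for f in pkg_json.parent.iterdir()
            if f.name in KNOWN_BAD_FILES
        )
        if hooks or bad_files:
            findings.append({
                "package": manifest.get("name", pkg_json.parent.name),
                "hooks": hooks,
                "suspicious_files": bad_files,
            })
    return findings
```

Note that install hooks are common in legitimate packages (native builds, for example), so hits on the hook check are leads to triage, not verdicts.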
Why you should care:
This represents supply chain risk at cloud-native scale. Roughly 20% of infections landed on GitHub Actions runners, meaning CI/CD pipeline compromise: the very infrastructure most organizations use to ship software. When your build pipeline is compromised, every deployment becomes a potential backdoor installation. The memory-scraping capabilities captured runtime secrets that never appeared in code repositories, including production database credentials and API keys with elevated privileges. Entro Security analysis found exfiltrated data from 1,195 distinct organizations, including major banks, government bodies, and Fortune 500 technology firms; in one case, a compromised semiconductor company’s self-hosted GitHub Actions runner exposed production environment secrets. The destructive failsafe shows adversary willingness to cause operational damage when detection occurs, turning incident response into a race against data destruction.
Key sources:
- Unit 42 analysis: https://unit42.paloaltonetworks.com/npm-supply-chain-attack/
- Datadog Security Labs technical breakdown: https://securitylabs.datadoghq.com/articles/shai-hulud-2.0-npm-worm/
- Wiz Research impact assessment: https://www.wiz.io/blog/shai-hulud-2-0-ongoing-supply-chain-attack
- CISA alert: https://www.cisa.gov/news-events/alerts/2025/09/23/widespread-supply-chain-compromise-impacting-npm-ecosystem
Gainsight OAuth Token Breach: 200+ Salesforce Instances Compromised
What happened:
On November 20, 2025, Salesforce disclosed unauthorized access to customer data through compromised Gainsight-published applications. Google Threat Intelligence Group confirmed over 200 potentially affected Salesforce instances, with ShinyHunters/Scattered Lapsus$ Hunters claiming responsibility. Attack activity began November 8, 2025, with reconnaissance, followed by unauthorized access between November 16-23 from IP addresses associated with commercial VPN services, the Tor network, and AWS.
This is the second major Salesforce supply chain attack in three months; the previous Salesloft Drift breach in August affected 760+ companies. ShinyHunters claims the combined Salesloft and Gainsight campaigns allowed them to steal data from nearly 1,000 organizations. Salesforce revoked all OAuth tokens associated with Gainsight apps and temporarily removed them from AppExchange. HubSpot and Zendesk integrations were also suspended as a precautionary measure.
Technical details that matter:
- OAuth token theft chain: Attackers gained access to 285 Salesforce instances after breaching Gainsight via secrets stolen in the previous Salesloft Drift breach, demonstrating how initial compromises create cascading supply chain effects
- User agent fingerprinting: Malicious user agent string “Salesforce-Multi-Org-Fetcher/1.0” observed in both Gainsight and Salesloft Drift attacks; consistent TTP signature across campaigns
- API abuse without platform vulnerability: Legitimate Gainsight Connected App credentials used for unauthorized API calls from non-whitelisted IPs – all activity appeared as valid application traffic
- Token lifecycle persistence: Three-month access window from initial August Gainsight compromise through November disclosure; attackers maintained persistent access through refresh token mechanisms
- Cross-platform exposure: Same Gainsight OAuth tokens potentially granted access to any connected service (Salesforce, HubSpot, Zendesk); single credential compromise creates multi-platform risk
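A practical detection step here is hunting your login telemetry for the campaign’s signatures. The user agent string comes from the reporting above; the record format below is an illustrative stand-in for rows you’d actually pull from Salesforce Event Monitoring or LoginHistory, and the allowlist check assumes your integration vendor publishes its egress IP ranges:

```python
import ipaddress

# User agent observed in both the Gainsight and Salesloft Drift
# campaigns, per the reporting above.
KNOWN_BAD_AGENTS = {"Salesforce-Multi-Org-Fetcher/1.0"}

def flag_suspicious_logins(records: list[dict], allowed_cidrs: list[str]) -> list[dict]:
    """Flag API logins that use a known-bad user agent or originate
    outside the vendor's published IP ranges. Each record is a dict
    with 'source_ip' and 'user_agent' keys (illustrative schema)."""
    networks = [ipaddress.ip_network(c) for c in allowed_cidrs]
    flagged = []
    for rec in records:
        bad_agent = rec.get("user_agent") in KNOWN_BAD_AGENTS
        ip = ipaddress.ip_address(rec["source_ip"])
        outside = not any(ip in net for net in networks)
        if bad_agent or outside:
            flagged.append({**rec,
                            "bad_agent": bad_agent,
                            "outside_allowlist": outside})
    return flagged
```

Because the malicious API calls carried legitimate OAuth tokens, user agent and source IP anomalies like these were essentially the only observable signal, which is the broader lesson of the incident.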
Why you should care:
This attack pattern targets the SaaS integration trust model that virtually all organizations depend on. When your CRM, customer success platform, and support systems share OAuth credentials, a single third-party compromise exposes your entire customer data ecosystem. The breach shares strong similarities with the Salesloft Drift attacks three months prior and is linked to the same threat cluster (ShinyHunters/UNC6240); this is that cluster’s third major campaign against Salesforce integrations in 2025, indicating sustained focus on the Salesforce ecosystem supply chain. For organizations using Salesforce, these OAuth-based attacks can bypass perimeter security entirely.
The data accessed includes business contact details, licensing information, and support case contents, exactly the relationship intelligence that enables social engineering attacks against high-value targets. The three-month persistence window before detection means adversaries had time to map your organization, identify high-privilege accounts, and establish additional footholds.
Key sources:
- Salesforce security advisory: https://status.salesforce.com/generalmessages/20000233
- Google GTIG findings (via TechCrunch): https://techcrunch.com/2025/11/21/google-says-hackers-stole-data-from-200-companies-following-gainsight-breach/
- Gainsight IoC details: https://www.helpnetsecurity.com/2025/11/26/gainsight-breach-salesforce-details-attack-window/
- CyberScoop investigation: https://cyberscoop.com/salesforce-gainsight-customers-breach/
CodeRED Emergency Alert System: INC Ransom Cripples Critical Infrastructure
What happened:
On November 25-26, 2025, Crisis24 confirmed its OnSolve CodeRED platform suffered a ransomware attack that disrupted emergency notification systems used by state and local governments, police departments, and fire agencies across the United States. The INC Ransom group gained unauthorized access on November 1, 2025, followed by file encryption on November 10 that triggered a nationwide outage. The attack forced Crisis24 to permanently decommission the legacy CodeRED environment and restore functionality from backups dated March 31, 2025.
INC Ransom published screenshots of stolen customer data, which includes names, addresses, email addresses, phone numbers, and clear-text passwords used for CodeRED user profiles. The group also leaked negotiation logs showing an initial $950,000 ransom demand reduced to $450,000; Crisis24 offered $100,000-$150,000, which the attackers rejected. Douglas County Sheriff’s Office in Colorado terminated their CodeRED contract entirely.
Technical details that matter:
- Critical infrastructure impact: CodeRED is widely used to push urgent notifications for severe weather, public safety incidents, missing persons and other critical situations. This attack disabled emergency communications during wildfire season and severe weather events
- Plaintext password storage: INC Ransom leak shows clear-text passwords in CodeRED’s database, a fundamental security failure storing authentication credentials without proper hashing/salting
- Extended dwell time: 9-day gap between initial access (November 1) and encryption deployment (November 10) allowed comprehensive data exfiltration before detection
- Backup restoration limitations: Restoring from the March 31, 2025 backup means 7+ months of registrations were lost; users who signed up after that date must manually re-enroll
- No redundancy architecture: Single point of failure design with no failover capability; when CodeRED went down, municipalities had zero emergency notification capability
- Credential stuffing risk: Password reuse combined with cleartext storage creates cascading compromise risk across users’ other accounts
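The cleartext-password failure has a decades-old fix: store only a salted, memory-hard hash and compare in constant time. A minimal sketch using Python’s standard library (the scrypt cost parameters here are illustrative; tune them for your hardware, or use a maintained library such as argon2-cffi, in production):

```python
import hashlib
import hmac
import os

# Illustrative scrypt cost parameters; raise n as hardware allows.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)

def hash_password(password: str) -> bytes:
    """Return salt || scrypt digest. Only this value is stored --
    the password itself is never written anywhere."""
    salt = os.urandom(16)  # unique per user, defeats rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    """Re-derive the digest with the stored salt and compare in
    constant time to avoid timing side channels."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, digest)
```

With a scheme like this, a database leak exposes only salted hashes that are expensive to crack, rather than credentials an attacker can replay immediately against victims’ other accounts.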
Why you should care:
This demonstrates that mission-critical public safety infrastructure operates on surprisingly fragile technical foundations. When CodeRED goes down or cannot be trusted, communities may miss evacuation orders, severe weather warnings, or active-shooter alerts when minutes matter. For hospitals with emergency department surge alerts, utilities with power restoration notifications, or government agencies with shelter-in-place systems, you’re likely using similar third-party mass notification platforms with comparable security postures. Crisis24’s FAQ section states “Unfortunately, there have been rising cybersecurity risks and penetrations across many organizations as of late”, which heavily implies they believe cyber incidents with disastrous effects are unavoidable. This is, of course, not true; while ransomware can be difficult to prevent, storing passwords in cleartext has been inexcusable for decades.
The plaintext password storage indicates CodeRED was built without basic security architecture, suggesting other legacy emergency systems may have similar design flaws. Law enforcement agencies terminated contracts due to the lack of proactive notification about the outage and breach, forcing them to rely on social media and door-to-door alerts as temporary measures. When your emergency notification system fails, you discover you have no backup communication channel to inform responders or citizens. The November timing (wildfire season in California, tornado season in the South, approaching winter storms) maximized operational impact when communities most needed reliable alert capabilities.
Key sources:
- BleepingComputer investigation: https://www.bleepingcomputer.com/news/security/onsolve-codered-cyberattack-disrupts-emergency-alert-systems-nationwide/
- The Register coverage: https://www.theregister.com/2025/11/26/codered_emergency_alert_ransomware
- Malwarebytes analysis: https://www.malwarebytes.com/blog/news/2025/11/millions-at-risk-after-nationwide-codered-alert-system-outage-and-data-breach
- SecurityWeek report: https://www.securityweek.com/ransomware-attack-disrupts-local-emergency-alert-system-across-us/
The Pattern This Week
Supply chain attacks aren’t just compromising software dependencies; they’re breaking the trust models that modern operations depend on. When your npm packages steal credentials, your SaaS integrations leak OAuth tokens, and your emergency notification vendor stores passwords in cleartext, you’re learning that third-party risk isn’t a compliance checkbox. It’s the primary attack surface. Every integration is a potential breach, every vendor a possible single point of failure, and every “trusted” system an opportunity for adversaries to scale their operations across hundreds or thousands of downstream victims simultaneously.
See you next week.

