WIREGRID · Breach Intel · Weekly Reflections

Breach news across the wire.

A running log of the largest publicly disclosed data breaches — victim, vector, impact, attribution, and what each incident means for practitioners. Updated weekly. Sources are public disclosures, SEC 8-Ks, and regulator announcements; my notes are opinions, not official analysis.

Breach Log


Expand a month to see the week-by-week entries. Each entry links to the primary disclosure and includes my 200-400 word take on what it means for anyone defending a similar stack.

[Week of 2026-04-15] · Delta Medical Network · Healthcare · 6.1M patient records
Victim: Delta Medical Network — 14-hospital regional system, ~28,000 employees
Industry: Healthcare
Records: 6.1M patient records · names, DOB, SSN, diagnosis codes, partial insurance claims data
Vector: Ransomware — Citrix NetScaler appliance compromised via known CVE, lateral movement to VMware ESXi hosts, encryption of clinical file shares and Epic backup volumes
Attribution: RansomHub affiliate (self-claimed on leak site)
Disclosure: HHS OCR breach portal filing; follow-up 8-K from parent holding company

What it means for practitioners

If you run a hospital system and you are still operating with Epic recovery that lives on the same VMware cluster as production, this breach is an indictment of your backup architecture, not a surprise attack. The Citrix entry point is a recurring story — the appliance was a patch behind, and the attackers almost certainly got in through something that had been in CISA KEV for months. I would not waste time on the ransomware narrative. The real question is why an internet-facing NetScaler could reach ESXi management on TCP 443 without a jump host, and why the backup repository credentials were domain-joined.

Three things to audit this week. First, pull your NetScaler, Ivanti, Fortinet, and Palo Alto appliance inventory against KEV and confirm patch-lag in days, not weeks — see "KEV as backlog" for how I structure that list. Second, confirm that your Veeam, Rubrik, or Cohesity proxies cannot be authenticated to with the same service account that runs daily domain operations; if the ransomware operator pivots from vCenter to backup in one hop, your RTO is fiction. Third, if you are in healthcare, run the tabletop where Epic Hyperspace is offline for 72 hours and your downtime procedures are paper — most clinical directors I have talked to have not rehearsed past hour 12.
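The KEV cross-check is mechanical once you have the inventory. A minimal sketch, assuming a hypothetical appliance list and illustrative KEV-listing dates (none of these values come from the incident):

```python
from datetime import date

# Hypothetical appliance inventory: (hostname, product, last_patched_on)
inventory = [
    ("ns-edge-01", "Citrix NetScaler", date(2026, 1, 10)),
    ("fg-dc-02", "Fortinet FortiGate", date(2026, 4, 1)),
]

# Date each product's exploited CVE was added to CISA KEV (illustrative)
kev_added = {
    "Citrix NetScaler": date(2026, 2, 1),
    "Fortinet FortiGate": date(2026, 3, 20),
}

def patch_lag_days(product, patched_on, as_of):
    """Days of exposure since the KEV listing; 0 means patched after listing."""
    listed = kev_added.get(product)
    if listed is None or patched_on >= listed:
        return 0
    return (as_of - listed).days

as_of = date(2026, 4, 15)
for host, product, patched in inventory:
    lag = patch_lag_days(product, patched, as_of)
    if lag > 0:
        print(f"{host}: {lag} days exposed since KEV listing")
```

The useful output is the per-host lag in days, which is the number the patch SLA should be written against.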

The OCR fine will land in 2027. The operational cost — diverted ambulances, cancelled surgeries, locum staffing — is already in the tens of millions. If your cyber-insurance renewal is this quarter, expect the carrier to ask for proof of immutable backups and out-of-band recovery networks. Have the diagrams ready.

Ransomware Healthcare VMware Citrix
[Week of 2026-04-08] · Corestack Cloud · SaaS · ~200 downstream tenants
Victim: Corestack Cloud — mid-market observability SaaS, ~1,800 paying customers
Industry: SaaS / DevOps tooling
Records: API keys and OAuth tokens scraped from ~200 customer tenants over 11 days — primarily AWS IAM access keys, GitHub PATs, Datadog keys
Vector: Supply chain — malicious version of @corestack/agent-sdk published to npm after a typosquat of a maintainer's 2FA recovery email; postinstall script exfiltrated env vars to a Cloudflare Worker under attacker control
Attribution: Unattributed; TTPs consistent with the npm-targeting cluster active since mid-2025
Disclosure: Vendor blog post, coordinated with GitHub Security Lab; CISA advisory AA26-099A

What it means for practitioners

The failure mode here is the one I keep flagging in vendor reviews: the maintainer account had 2FA, but the recovery email still pointed at a personal Gmail from 2019. npm token compromise is the oldest story in the supply-chain playbook, and it still works because most SaaS shops do not gate publish with hardware-backed attestation. If your build pipeline installs anything via npm or pip without a lockfile pin plus a cosigned provenance check, you are one typosquat away from being entry 201 on this list.

For defenders, the concrete move this week is to pull every CI/CD runner's outbound egress log for the last 30 days and look for calls to newly registered Cloudflare Workers, Vercel functions, and ngrok tunnels. That is where postinstall beaconing hides. Second, rotate every long-lived cloud credential that sat in an environment variable on a build runner — if you cannot produce the rotation timestamp for your GitHub Actions AWS_ACCESS_KEY_ID in the last 90 days, that is your remediation. Third, this is exactly the pattern "supply-chain breach — your responsibilities" is about: you do not get to point at the vendor. Your AWS root account got touched because you shipped the agent.
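The egress sweep is a grep, not a product. A minimal sketch against hypothetical proxy log lines (the suspect-domain pattern list is illustrative, not exhaustive):

```python
import re

# Hypothetical CI-runner egress log lines: "timestamp runner dest_host bytes"
egress_log = [
    "2026-04-06T02:11:09Z runner-14 registry.npmjs.org 48210",
    "2026-04-06T02:11:12Z runner-14 quiet-flower-91ab.workers.dev 1893",
    "2026-04-07T13:02:55Z runner-03 api.github.com 9920",
    "2026-04-07T13:03:01Z runner-03 4f2a1c.ngrok-free.app 771",
]

# Hosting patterns where postinstall beaconing commonly hides (illustrative)
SUSPECT = re.compile(r"\.(workers\.dev|ngrok(-free)?\.app|vercel\.app)$")

def flag_beacons(lines):
    """Return (runner, host) pairs whose destination matches a suspect pattern."""
    hits = []
    for line in lines:
        _ts, runner, host, _nbytes = line.split()
        if SUSPECT.search(host):
            hits.append((runner, host))
    return hits

for runner, host in flag_beacons(egress_log):
    print(f"review: {runner} -> {host}")
```

Every hit is a runner whose build history for that window needs a human look; registry and VCS traffic stays out of the result by construction.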

On the vendor side, Corestack did two things right and one thing wrong. Right: they pulled the package within 4 hours of the SBOM diff alert firing, and they published per-customer IOC lists (Worker hostnames, hashes). Wrong: their incident page still says "no customer data accessed" while simultaneously telling customers to rotate AWS keys. Those statements cannot both be true. Read the disclosure skeptically.

Supply chain · SaaS · npm · DevOps
[Week of 2026-04-01] · Anvilford Industries · Manufacturing / F500 · ~340 GB source code
Victim: Anvilford Industries — Fortune 300 industrial controls manufacturer, ~62,000 employees
Industry: Manufacturing / industrial IoT
Records: ~340 GB of proprietary firmware source, PLC ladder logic, and three years of JIRA tickets — exfiltrated to a personal Dropbox Business tenant
Vector: Insider — senior firmware engineer on a 30-day notice period used the sanctioned Dropbox client on a corporate laptop; DLP rules were still configured for the old Box tenant after a 2024 migration and never re-baselined for Dropbox
Attribution: Named employee; FBI complaint filed
Disclosure: SEC 8-K (Item 1.05) followed by press release; federal indictment unsealed 2026-04-10

What it means for practitioners

This is not a sophisticated attack story. It is a DLP hygiene story. A company migrated from Box to Dropbox in 2024, did not rewrite the Netskope / Zscaler policy that blocked *.box.com uploads, and for two years any employee could sync a 340 GB repo to a personal Dropbox over the corporate network at line rate. The exit-interview detection came from Git commit audit logs, not from the DLP stack. Your DLP stack is probably in the same shape.

What to do about it. First, generate a list of every SaaS storage domain your proxy allows — OneDrive, Dropbox, Google Drive, iCloud, Sync.com, pCloud, Mega — and compare it to the list of sanctioned corporate tenants. Anything on the allow list that is not your tenant is an exfil path. Second, for departing employees in code-adjacent roles, enforce a 30-day "read-only + monitor" mode on source-control and artifact repos the moment notice is given; this is a control Okta, Entra, and GitHub Enterprise all support but almost nobody turns on. Third, require device-bound tokens on any GitHub / GitLab access — personal access tokens on a personally-owned device should be architecturally impossible, not merely against policy.
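The first audit above is a set difference. A minimal sketch with hypothetical domains (the tenant names are placeholders):

```python
# Hypothetical proxy allow list vs. the sanctioned corporate tenants
proxy_allowed = {
    "companyname.sharepoint.com",   # sanctioned OneDrive/SharePoint tenant
    "companyname.app.box.com",      # old sanctioned Box tenant
    "dropbox.com",                  # blanket allow -- reaches any personal account
    "drive.google.com",             # blanket allow
}
sanctioned = {"companyname.sharepoint.com", "companyname.app.box.com"}

# Anything allowed but not a sanctioned tenant is an exfil path
exfil_paths = sorted(proxy_allowed - sanctioned)
for domain in exfil_paths:
    print(f"exfil path: {domain}")
```

In practice the allow list comes out of the Netskope/Zscaler policy export; the point is that the comparison itself takes minutes, not a quarter.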

The insider case is the hardest breach to defend because the user is authorized. But a 340 GB transfer over 11 days is not a stealth operation — it is a monitoring gap. If your SIEM cannot alert on "single user, outbound upload > 10 GB/day to a non-corporate domain," fix that before you buy another UEBA product.
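The "single user, > 10 GB/day to a non-corporate domain" alert is a group-by, not a UEBA model. A minimal sketch over hypothetical proxy records:

```python
from collections import defaultdict

GB = 1024 ** 3
THRESHOLD = 10 * GB  # per user, per day, per non-corporate destination

# Hypothetical proxy records: (day, user, dest_domain, bytes_out)
records = [
    ("2026-03-14", "jdoe", "dropbox.com", 4 * GB),
    ("2026-03-14", "jdoe", "dropbox.com", 8 * GB),
    ("2026-03-14", "asmith", "companyname.sharepoint.com", 20 * GB),
]
corporate = {"companyname.sharepoint.com"}

def daily_upload_alerts(records):
    """Aggregate outbound bytes per (day, user, domain) and flag threshold breaches."""
    totals = defaultdict(int)
    for day, user, domain, nbytes in records:
        if domain not in corporate:
            totals[(day, user, domain)] += nbytes
    return [key for key, total in totals.items() if total > THRESHOLD]

print(daily_upload_alerts(records))
```

Note that the 12 GB to Dropbox fires even though no single transfer crossed the line — the aggregation is the control, which is exactly what an 11-day, 340 GB sync would trip on day one.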

Insider · Manufacturing · DLP · IP theft
[Week of 2026-03-25] · Meridian Financial Partners · Finance · ~2.4M client records
Victim: Meridian Financial Partners — regional wealth management firm, $48B AUM
Industry: Financial services / wealth management
Records: ~2.4M client records · names, account numbers, partial SSN, brokerage statement PDFs going back to 2019
Vector: Vuln chain — internet-facing SSL-VPN appliance (Fortinet FortiGate, firmware three minor versions behind) exploited via pre-auth heap overflow; pivot to Active Directory via a cached service-account NTLM hash; exfil over DNS tunneling to avoid the web proxy
Attribution: Unattributed; access-broker post on a Russian-speaking forum 11 days before the intrusion
Disclosure: SEC 8-K (Item 1.05); FINRA notice; state AG filings in NY, NJ, MA

What it means for practitioners

FortiGate, Ivanti Connect Secure, Citrix NetScaler, Palo Alto GlobalProtect. Pick any month in the last three years and one of them has been the entry point for a major breach. The pattern is always the same: appliance sits on the edge, runs a closed-source TLS stack, ships with a pre-auth memory-corruption bug once or twice a year. If your risk register does not treat internet-facing VPN concentrators as a distinct asset class with a 72-hour patch SLA, you are betting the firm on the research cadence of three vendors.

Concrete actions for finance IT leaders. First, inventory every edge appliance by model, firmware, and last-patch date, and post it on the CISO's wall. If the number of appliances more than 30 days behind is non-zero, that is the only metric that matters this week. Second, push Zero Trust Network Access (Cloudflare, Zscaler ZPA, Tailscale Enterprise) for any new remote-access use case; the SSL-VPN box should be on a decommissioning plan, not a refresh cycle. Third, and this is where most firms lose — audit every service account with domain-admin-equivalent privilege that authenticates via NTLM. If you still have NTLM enabled on domain controllers in 2026, you are accepting pass-the-hash as a risk.
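The wall metric from the first action reduces to one function over the inventory. A minimal sketch with hypothetical appliances and patch dates:

```python
from datetime import date

# Hypothetical edge-appliance inventory: (model, firmware, last_patched)
appliances = [
    ("FortiGate 600E", "7.4.3", date(2026, 1, 5)),
    ("NetScaler MPX", "14.1-21", date(2026, 3, 10)),
    ("PA-3220 GlobalProtect", "11.1.2", date(2026, 3, 28)),
]

def behind_sla(appliances, as_of, sla_days=30):
    """The only metric that matters this week: appliances > sla_days behind."""
    return [(model, fw) for model, fw, patched in appliances
            if (as_of - patched).days > sla_days]

stale = behind_sla(appliances, as_of=date(2026, 3, 30))
print(f"{len(stale)} appliance(s) more than 30 days behind: {stale}")
```

If that list is non-empty on a Monday, it is the week's agenda; everything else on the roadmap waits.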

The DNS exfil detail is the useful tell. Meridian had a web proxy but no DNS-layer egress control. If your outbound DNS can reach arbitrary upstream resolvers, you cannot claim data-exfil containment. Force all workstations through a logged recursive resolver (Umbrella, Quad9 Enterprise, Pi-hole + sinkhole lists, whatever) and alert on non-standard TXT volume. See "first 24 hours of breach response" for the triage order.
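The TXT-volume alert can start as a per-host counter before you buy anything. A minimal sketch over hypothetical resolver logs (the baseline of 50 TXT queries per host per hour is an assumption you would tune to your own traffic):

```python
from collections import Counter

# Hypothetical resolver log entries: (src_host, qtype, qname)
queries = [
    ("ws-114", "A", "www.meridianfp.com"),
    ("ws-114", "TXT", "_dmarc.meridianfp.com"),
    # A DNS tunnel looks like hundreds of unique TXT lookups from one host
    *[("ws-207", "TXT", f"{i:04x}.data.evil-resolver.net") for i in range(500)],
]

TXT_BASELINE = 50  # assumed per-host TXT queries/hour considered normal

def txt_anomalies(queries):
    """Hosts whose TXT query volume exceeds the baseline."""
    counts = Counter(src for src, qtype, _qname in queries if qtype == "TXT")
    return {src: n for src, n in counts.items() if n > TXT_BASELINE}

print(txt_anomalies(queries))
```

Legitimate TXT traffic (SPF, DMARC, verification records) stays well under any sane baseline; a tunnel does not.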

Vuln chain · Finance · Fortinet · VPN
[Week of 2026-03-18] · Hartwell County Revenue Dept. · Government · 1.2M citizen records
Victim: Hartwell County Department of Revenue — U.S. county government, population ~950K
Industry: Government / municipal
Records: 1.2M citizen tax records · names, SSN, filed return PDFs (2020-2025), direct-deposit routing/account numbers
Vector: Misconfig — AWS S3 bucket holding tax-return backups set to "public-read" during a 2023 data-lake migration and never re-locked; discovered by a security researcher via a grayhatwarfare-style indexer and reported responsibly
Attribution: No malicious access confirmed from CloudTrail logs; exposure window of ~27 months
Disclosure: State AG press conference; researcher disclosure post; county council hearing

What it means for practitioners

A public S3 bucket in 2026 is not an excusable story. AWS has shipped Block Public Access at the account level since 2018 and turned it on by default in 2023 for new buckets. This exposure means Hartwell either inherited buckets from before the default flip and never audited, or someone explicitly overrode BPA for a "temporary" migration. Both are organizational failures, not technical ones.

What to do. First, in every AWS account you own, confirm that Block Public Access is enforced at the account level — not just the bucket level. Run aws s3control get-public-access-block --account-id ... across every account via Organizations and make the output a monthly auditor artifact. Second, for any bucket holding regulated data (tax, health, PCI), make S3 Object Lock with governance mode the default, not the exception; this would not have prevented the exposure but it would have prevented silent deletion of the audit trail. Third, the CloudTrail-based "no malicious access" claim depends entirely on whether CloudTrail S3 data events were enabled. They almost never are by default because the cost scares people. For sensitive buckets, turn them on and ship to a separate log-archive account. Without data events, you are guessing.
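The monthly auditor artifact only works if someone actually evaluates the CLI output, not just files it. A minimal sketch that checks all four BPA flags in the JSON that get-public-access-block returns (the values here are illustrative):

```python
import json

# Example JSON in the shape returned by:
#   aws s3control get-public-access-block --account-id <ACCOUNT_ID>
# (illustrative values -- one flag deliberately left false)
cli_output = """
{
  "PublicAccessBlockConfiguration": {
    "BlockPublicAcls": true,
    "IgnorePublicAcls": true,
    "BlockPublicPolicy": false,
    "RestrictPublicBuckets": true
  }
}
"""

def bpa_fully_enforced(raw):
    """True only if all four account-level Block Public Access flags are set."""
    cfg = json.loads(raw)["PublicAccessBlockConfiguration"]
    flags = ("BlockPublicAcls", "IgnorePublicAcls",
             "BlockPublicPolicy", "RestrictPublicBuckets")
    return all(cfg.get(flag, False) for flag in flags)

print("enforced" if bpa_fully_enforced(cli_output)
      else "GAP: account-level BPA incomplete")
```

Three true flags and one false is still a gap; the check is all-or-nothing by design.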

The harder lesson for government IT: the researcher found this in a public index. So did everyone else who bothered to look. Assume that any S3 bucket that was ever public has been crawled, and plan your notification strategy accordingly. The 27-month exposure window means the breach happened in 2023; what you are experiencing now is the disclosure, not the compromise.

Misconfig · Government · AWS S3 · Cloud posture
[Week of 2026-03-11] · Briarcliff Retail Group · Retail · ~18M loyalty accounts
Victim: Briarcliff Retail Group — specialty apparel chain, 2,100 stores across US/Canada
Industry: Retail
Records: ~18M loyalty accounts · email, phone, hashed password (bcrypt), purchase history, birth month, last-4 of payment card
Vector: Third party — breach originated at a marketing-automation vendor (invented name: "Plumeria CDP"); attacker obtained a long-lived API key with read access to the full customer-data-platform export endpoint
Attribution: Unattributed; data offered for sale on a BreachForums successor
Disclosure: Vendor notified customer on 2026-03-09; Briarcliff 8-K filed 2026-03-12

What it means for practitioners

The retail pattern keeps repeating because marketing tech is the softest vendor tier any enterprise integrates with. The CMO signs a CDP contract, the data team wires a full customer export to the vendor's S3 bucket on a nightly job, security reviews the SOC 2 and moves on. Two years later the vendor has a breach and your 18M loyalty records are the headline. The CDP vendor was not the target — your data was.

Do these three things this quarter. First, pull every API key your marketing, analytics, and CDP vendors hold and confirm (a) scope is read-only and least-privilege, (b) rotation cadence is <= 90 days, (c) IP allowlisting is enforced server-side. The 18M number happened because one key with no rotation and no IP allowlist was valid for three years. Second, audit what you are actually sending to these vendors. In my experience, about 40% of the fields flowing to a typical CDP are not needed for the documented use case; you are shipping birth-year and last-4 because the ETL was copy-pasted, not designed. Minimize before you encrypt. Third, if your privacy/GRC function has not classified your marketing vendors as high-criticality third parties, escalate — they hold more PII than most of your actual business systems.
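The three-condition key audit is easy to make mechanical once the inventory exists. A minimal sketch over a hypothetical key register (vendor names and dates are placeholders):

```python
from datetime import date

# Hypothetical vendor API-key inventory
keys = [
    {"vendor": "Plumeria CDP", "read_only": True,
     "last_rotated": date(2023, 2, 1), "ip_allowlist": False},
    {"vendor": "analytics-co", "read_only": True,
     "last_rotated": date(2026, 2, 20), "ip_allowlist": True},
]

def key_violations(keys, as_of, max_age_days=90):
    """Flag keys failing scope, rotation-cadence, or IP-allowlist checks."""
    bad = []
    for k in keys:
        reasons = []
        if not k["read_only"]:
            reasons.append("writable scope")
        if (as_of - k["last_rotated"]).days > max_age_days:
            reasons.append("rotation overdue")
        if not k["ip_allowlist"]:
            reasons.append("no IP allowlist")
        if reasons:
            bad.append((k["vendor"], reasons))
    return bad

print(key_violations(keys, as_of=date(2026, 3, 15)))
```

A key that has not rotated since 2023 with no allowlist is exactly the three-year-valid credential that produced the 18M number.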

On consumer notification: bcrypt with a cost factor >= 12 is holding up fine, so the password hashes are a low-velocity concern. The high-velocity concern is the "birth month + email + purchase history" combo, which is enough to drive credible phishing against the loyalty base for the next 18 months. Tell your customers to expect it.

Third party · Retail · Marketing tech · API keys
[Week of 2026-02-25] · Nyxrate Exchange · Crypto · ~$142M hot-wallet loss
Victim: Nyxrate Exchange — mid-tier centralized crypto exchange, Singapore-regulated, ~3.4M users
Industry: Crypto / financial services
Records: No customer PII exfiltrated; ~$142M in ETH, USDT, and SOL drained from 4 hot wallets over ~90 minutes
Vector: MFA-bypass phishing of a wallet-operations engineer — an Okta AiTM proxy page captured the session cookie; the attacker imported the cookie into a controlled browser, approved a pending withdrawal-policy change, then initiated outflows. TOTP was in use; no FIDO2
Attribution: Consistent with a Lazarus sub-cluster per on-chain tracing (Chainalysis, TRM Labs)
Disclosure: Vendor status page; Singapore MAS statement; on-chain forensics public within 6 hours

What it means for practitioners

In 2026, if your high-value operators authenticate with TOTP, you are choosing to be on this list eventually. Adversary-in-the-middle phishing kits (Evilginx, Muraena, and the commercial variants sold in Telegram channels) defeat TOTP and push-notification MFA with a single click from the user. The only thing they do not defeat is origin-bound FIDO2 / WebAuthn, because the cryptographic challenge is bound to the real origin, not to whatever look-alike domain the user happens to see.

If you operate anything with custody — crypto exchange, fintech treasury, payment processor, stablecoin issuer — your wallet operators, SRE leads, and anyone who can approve an outbound transfer should be on hardware security keys with phishing-resistant auth enforced at the Okta/Entra policy layer, not at the user's discretion. Conditional access policy: require authenticator strength = phishingResistant for this application group. That is a one-line change in Entra ID. In Okta it is the "FIDO2 (WebAuthn)" factor under Authenticator Enrollment with a sign-on policy that excludes all other factors for the privileged group. Do it this week.

Second lesson: the withdrawal-policy change was approved by a single operator. Any movement of custody funds should require M-of-N hardware approval with a time-delay on policy mutations — this is how cold-storage custodians work, and hot-wallet operations should import the same pattern. Fireblocks, Copper, BitGo all support it; in-house wallet software almost never does. Third: Nyxrate's SIEM did not alert on the withdrawal-volume anomaly because the threshold was calibrated to normal daily outflow. Calibrate for 90-minute windows, not 24-hour windows, or the drain finishes before the SOC pages.
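The 90-minute-window point is worth making concrete: the same events that stay under a daily threshold blow straight through a rolling one. A minimal sketch with hypothetical outflow values (the $25M window limit is an assumed calibration, not Nyxrate's):

```python
from datetime import datetime, timedelta

# Hypothetical hot-wallet outflow events: (timestamp, usd_value)
outflows = [
    (datetime(2026, 2, 24, 3, 0), 2_000_000),
    (datetime(2026, 2, 24, 3, 25), 60_000_000),
    (datetime(2026, 2, 24, 4, 10), 80_000_000),
    (datetime(2026, 2, 24, 9, 0), 1_500_000),
]

WINDOW = timedelta(minutes=90)
LIMIT = 25_000_000  # assumed: far above any legitimate 90-minute outflow

def window_breach(events):
    """Return the start time of the first 90-minute window exceeding LIMIT."""
    for i, (start, _value) in enumerate(events):
        total = sum(v for t, v in events[i:] if t - start <= WINDOW)
        if total > LIMIT:
            return start
    return None

print(window_breach(outflows))
```

With a 24-hour threshold calibrated to normal daily volume, the page arrives after the wallets are empty; the rolling window fires on the first anomalous burst.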

AiTM phishing · Crypto · Okta · FIDO2
[Week of 2026-02-18] · Kestrel Forge Manufacturing · Manufacturing · 9-day production halt
Victim: Kestrel Forge Manufacturing — mid-market auto-parts supplier, 14 plants, ~9,500 employees
Industry: Manufacturing / automotive supply
Records: Primarily operational impact — 9-day halt across 6 plants. Data exfil ~280 GB (engineering drawings, HR, email archives); ransom demand $20M in XMR
Vector: BlackCat-style ransomware via unpatched VMware vCenter (CVE-2026-21144, unauth RCE disclosed 2026-01-17); the attacker reached vCenter because the OT and IT VMware clusters shared the same management VLAN
Attribution: RA World affiliate (self-claimed); BlackCat-derived locker binary
Disclosure: SEC 8-K; customer letters to Tier 1 OEMs; statement via external counsel

What it means for practitioners

vCenter continues to be the most effective pivot point in enterprise networks because it holds the keys to every VM on the cluster and is rarely segmented from production admin networks. The January 2026 CVE had a public PoC within 10 days of disclosure. Kestrel had 40 days to patch and did not, because the OT organization owned the ESXi hosts running the plant-floor HMIs and the IT organization owned the vCenter, and nobody owned the cross-cutting patch decision.

If you run any manufacturing, utility, or logistics environment with OT under IT-owned virtualization, the ownership gap is the vulnerability. Concrete moves: first, publish a named accountable owner for every vCenter, Hyper-V, and Nutanix cluster — one human, not a team. Second, put vCenter management interfaces on a dedicated admin VLAN reachable only from a PAM jump host (CyberArk, Delinea, Teleport — pick one), and confirm that no plant-floor workstation can reach TCP 443 on any vCenter. Third, for OT specifically, run the IEC 62443 zone/conduit exercise even if you do not have to; the conversation forces the segmentation gaps into a diagram where the CFO can see them.
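The "no plant-floor workstation can reach TCP 443 on any vCenter" claim is testable from the segment itself. A minimal sketch (hostnames are placeholders; run it from a representative plant-floor machine, not from the admin VLAN):

```python
import socket

def can_reach(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds (i.e., the path is NOT blocked)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        return s.connect_ex((host, port)) == 0
    finally:
        s.close()

# Placeholder vCenter hostnames -- substitute your own inventory
for vcenter in ["vcenter-ot.example.internal", "vcenter-it.example.internal"]:
    try:
        reachable = can_reach(vcenter, 443)
    except socket.gaierror:
        reachable = False  # name does not even resolve from this segment
    status = "REACHABLE (segmentation gap)" if reachable else "blocked"
    print(f"{vcenter}:443 -> {status}")
```

Any "REACHABLE" line is a finding for the named cluster owner, and the script is cheap enough to run from every OT segment after each firewall change.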

The $20M demand is a sideshow. The real cost for Kestrel will be the OEM contract penalties for missed JIT deliveries, which in automotive can exceed the ransom by an order of magnitude. If you supply a Tier 1, your contract almost certainly has clauses about production continuity that your security team has never read. Go read them. That is the number you are defending, not the ransom.

Ransomware · Manufacturing · VMware · OT/IT

How I write these

Each breach entry is short by design — ~300 words, not a 5,000-word post-mortem. The target reader is a senior IT or security practitioner who will scan ten of these a month and remember the patterns. The goal is signal density over narrative.

Every entry includes: Victim · Industry · Records · Vector · Attribution · Takeaway. Where the primary source is an SEC 8-K, I link it. Where it's a vendor disclosure, I link that. Where there is no primary source and I'm reading attacker claims on leak sites, I flag it explicitly.