Across the wire.
Expand a month to see the week-by-week entries. Each entry links to the primary disclosure and includes my 200-400 word take on what it means for anyone defending a similar stack.
[Week of 2026-04-15] · Delta Medical Network
What it means for practitioners
If you run a hospital system and you are still operating with Epic recovery that lives on the same VMware cluster as production, this breach is an indictment of your backup architecture, not a surprise attack. The Citrix entry point is a recurring story — the appliance was a patch behind, and the attackers almost certainly got in through something that had been in CISA KEV for months. I would not waste time on the ransomware narrative. The real question is why an internet-facing NetScaler could reach ESXi management on TCP 443 without a jump host, and why the backup repository credentials were domain-joined.
Three things to audit this week. First, pull your NetScaler, Ivanti, Fortinet, and Palo Alto appliance inventory against KEV and confirm patch lag in days, not weeks — see "KEV as backlog" for how I structure that list. Second, confirm that your Veeam, Rubrik, or Cohesity proxies cannot be authenticated to with the same service account that runs daily domain operations; if the ransomware operator can pivot from vCenter to backup in one hop, your RTO is fiction. Third, if you are in healthcare, run the tabletop where Epic Hyperspace is offline for 72 hours and your downtime procedures are paper — most clinical directors I have talked to have not rehearsed past hour 12.
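The first audit item can be sketched as a simple cross-reference. This is a minimal illustration, assuming you can export your appliance inventory with last-patch dates and pre-filter the KEV feed down to the fix-available dates for your edge vendors — the hostnames, dates, and data shapes here are hypothetical.

```python
from datetime import date

# Hypothetical inventory export: appliance -> date its current firmware was applied.
inventory = [
    {"host": "ns-dmz-01",  "vendor": "citrix",   "patched_on": date(2026, 1, 5)},
    {"host": "fg-edge-02", "vendor": "fortinet", "patched_on": date(2026, 4, 1)},
]
# Assumed input: vendor -> date the patch for the relevant KEV entry shipped.
kev_fix_available = {
    "citrix":   date(2026, 2, 10),
    "fortinet": date(2026, 3, 20),
}

def patch_lag_days(inv, kev, today):
    """Days each appliance has sat unpatched past the KEV fix date."""
    report = {}
    for box in inv:
        fix_date = kev.get(box["vendor"])
        if fix_date and box["patched_on"] < fix_date:
            report[box["host"]] = (today - fix_date).days
    return report

print(patch_lag_days(inventory, kev_fix_available, date(2026, 4, 15)))
# -> {'ns-dmz-01': 64}
```

The output is the list you post, in days: anything non-empty is this week's work.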
The OCR fine will land in 2027. The operational cost — diverted ambulances, cancelled surgeries, locum staffing — is already in the tens of millions. If your cyber-insurance renewal is this quarter, expect the carrier to ask for proof of immutable backups and out-of-band recovery networks. Have the diagrams ready.
[Week of 2026-04-08] · Corestack Cloud
@corestack/agent-sdk published to npm after a typosquat on a maintainer's 2FA recovery email; the postinstall script exfiltrated env vars to a Cloudflare Worker under attacker control
What it means for practitioners
The failure mode here is the one I keep flagging in vendor reviews: the maintainer account had 2FA, but the recovery email still pointed at a personal Gmail from 2019. npm token compromise is the oldest story in the supply-chain playbook, and it still works because most SaaS shops do not gate publish with hardware-backed attestation. If your build pipeline installs anything via npm or pip without a lockfile pin plus a cosigned provenance check, you are one typosquat away from being entry 201 on this list.
For defenders, the concrete move this week is to pull every CI/CD runner's outbound egress log for the last 30 days and look for calls to newly registered Cloudflare Workers, Vercel functions, and ngrok tunnels. That is where postinstall beaconing hides. Second, rotate every long-lived cloud credential that sat in an environment variable on a build runner — if you cannot produce the rotation timestamp for your GitHub Actions AWS_ACCESS_KEY_ID in the last 90 days, that is your remediation. Third, this is exactly the pattern "supply-chain breach: your responsibilities" is about: you do not get to point at the vendor. Your AWS root account got touched because you shipped the agent.
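The egress sweep reduces to pattern-matching destination hosts against the hosting platforms where postinstall beacons tend to live. A minimal sketch, assuming a simplified log format of one "source destination-host" pair per line — real proxy logs will need their own parser, and the domain list should be extended to whatever your environment allows:

```python
import re

# Hosting suffixes where short-lived beacon endpoints commonly appear.
SUSPECT = re.compile(r"\.(workers\.dev|vercel\.app|ngrok(-free)?\.app|ngrok\.io)$")

def flag_beacons(log_lines):
    """Return (runner, destination) pairs whose destination matches a suspect suffix."""
    hits = []
    for line in log_lines:
        src, dst = line.split()
        if SUSPECT.search(dst):
            hits.append((src, dst))
    return hits

logs = [
    "runner-7 registry.npmjs.org",
    "runner-7 quiet-mud-1a2b.workers.dev",   # illustrative beacon hostname
    "runner-3 github.com",
]
print(flag_beacons(logs))  # -> [('runner-7', 'quiet-mud-1a2b.workers.dev')]
```

Every hit then gets checked against domain-registration age; a runner talking to a Worker registered last week is your incident.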
On the vendor side, Corestack did two things right and one thing wrong. Right: they pulled the package within 4 hours of the SBOM diff alert firing, and they published per-customer IOC lists (Worker hostnames, hashes). Wrong: their incident page still says "no customer data accessed" while simultaneously telling customers to rotate AWS keys. Those statements cannot both be true. Read the disclosure skeptically.
[Week of 2026-04-01] · Anvilford Industries
What it means for practitioners
This is not a sophisticated attack story. It is a DLP hygiene story. A company migrated from Box to Dropbox in 2024, did not rewrite the Netskope / Zscaler policy that blocked *.box.com uploads, and for two years any employee could sync a 340 GB repo to a personal Dropbox over the corporate network at line rate. The exit-interview detection came from Git commit audit logs, not from the DLP stack. Your DLP stack is probably in the same shape.
What to do about it. First, generate a list of every SaaS storage domain your proxy allows — OneDrive, Dropbox, Google Drive, iCloud, Sync.com, pCloud, Mega — and compare it to the list of sanctioned corporate tenants. Anything on the allow list that is not your tenant is an exfil path. Second, for departing employees in code-adjacent roles, enforce a 30-day "read-only + monitor" mode on source-control and artifact repos the moment notice is given; this is a control Okta, Entra, and GitHub Enterprise all support but almost nobody turns on. Third, require device-bound tokens on any GitHub / GitLab access — personal access tokens on a personally-owned device should be architecturally impossible, not merely against policy.
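The first step is a set difference. A sketch under stated assumptions — the domains below are illustrative stand-ins for your proxy's actual allow list and your sanctioned tenant list:

```python
# Anything the proxy allows that is not a sanctioned corporate tenant is an exfil path.
proxy_allowed = {
    "yourcorp-my.sharepoint.com",  # sanctioned OneDrive tenant (assumed)
    "www.dropbox.com",             # leftover from a past migration?
    "drive.google.com",
    "mega.nz",
}
sanctioned = {"yourcorp-my.sharepoint.com", "www.dropbox.com"}

exfil_paths = sorted(proxy_allowed - sanctioned)
print(exfil_paths)  # -> ['drive.google.com', 'mega.nz']
```

Two lines of logic; the work is in getting an honest export of what the proxy actually permits.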
The insider case is the hardest breach to defend because the user is authorized. But a 340 GB transfer over 11 days is not a stealth operation — it is a monitoring gap. If your SIEM cannot alert on "single user, outbound upload > 10 GB/day to a non-corporate domain," fix that before you buy another UEBA product.
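The alert rule above is a per-user, per-day aggregation, not a UEBA model. A hedged sketch of the logic, assuming proxy events reduced to (user, day, destination, bytes) tuples — field names and the sanctioned-domain set are placeholders for your environment:

```python
from collections import defaultdict

CORPORATE = {"yourcorp-my.sharepoint.com"}  # assumed sanctioned destinations
THRESHOLD_BYTES = 10 * 1024**3              # the 10 GB/day line from the rule above

def daily_upload_alerts(events):
    """Flag (user, day) pairs whose non-corporate upload volume exceeds the threshold."""
    totals = defaultdict(int)
    for user, day, dst, nbytes in events:
        if dst not in CORPORATE:
            totals[(user, day)] += nbytes
    return [key for key, total in totals.items() if total > THRESHOLD_BYTES]

events = [
    ("jdoe",   "2026-03-02", "www.dropbox.com",            8 * 1024**3),
    ("jdoe",   "2026-03-02", "www.dropbox.com",            5 * 1024**3),
    ("asmith", "2026-03-02", "yourcorp-my.sharepoint.com", 40 * 1024**3),
]
print(daily_upload_alerts(events))  # -> [('jdoe', '2026-03-02')]
```

Note that the 340 GB / 11 days case averages roughly 31 GB/day — three times this threshold — which is why the gap is monitoring, not detection science.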
[Week of 2026-03-25] · Meridian Financial Partners
What it means for practitioners
FortiGate, Ivanti Connect Secure, Citrix NetScaler, Palo Alto GlobalProtect. Pick any month in the last three years and one of them has been the entry point for a major breach. The pattern is always the same: appliance sits on the edge, runs a closed-source TLS stack, ships with a pre-auth memory-corruption bug once or twice a year. If your risk register does not treat internet-facing VPN concentrators as a distinct asset class with a 72-hour patch SLA, you are betting the firm on the research cadence of three vendors.
Concrete actions for finance IT leaders. First, inventory every edge appliance by model, firmware, and last-patch date, and post it on the CISO's wall. If the number of appliances more than 30 days behind is non-zero, that is the only metric that matters this week. Second, push Zero Trust Network Access (Cloudflare, Zscaler ZPA, Tailscale Enterprise) for any new remote-access use case; the SSL-VPN box should be on a decommissioning plan, not a refresh cycle. Third — and this is where most firms lose — audit every service account with domain-admin-equivalent privilege that authenticates via NTLM. If you still have NTLM enabled on domain controllers in 2026, you are accepting pass-the-hash as a risk.
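The wall metric is one number. A minimal sketch of how I would compute it, assuming an inventory export that records installed firmware, current firmware, and the release date of the current firmware — all rows here are illustrative:

```python
from datetime import date

inventory = [
    {"host": "vpn-ns-01", "installed": "14.1-17", "current": "14.1-21",
     "current_released": date(2026, 2, 1)},   # behind, fix out 52 days
    {"host": "vpn-fg-02", "installed": "7.4.6", "current": "7.4.6",
     "current_released": date(2026, 3, 1)},   # up to date
]

def behind_count(inv, today, sla_days=30):
    """Appliances running old firmware more than sla_days after the current release."""
    return sum(
        1 for a in inv
        if a["installed"] != a["current"]
        and (today - a["current_released"]).days > sla_days
    )

print(behind_count(inventory, date(2026, 3, 25)))  # -> 1
```

If that prints anything other than 0, the rest of the quarter's roadmap is negotiable and this is not.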
The DNS exfil detail is the useful tell. Meridian had a web proxy but no DNS-layer egress control. If your outbound DNS can reach arbitrary upstream resolvers, you cannot claim data-exfil containment. Force all workstations through a logged recursive resolver (Umbrella, Quad9 Enterprise, Pi-hole + sinkhole lists, whatever) and alert on non-standard TXT volume. See "first 24 hours of breach response" for the triage order.
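The TXT-volume alert is a per-source counter over resolver logs. A sketch assuming logs reduced to (source-IP, query-type) tuples for one hour; the threshold is illustrative and should be calibrated to your fleet's actual baseline:

```python
from collections import Counter

def txt_outliers(queries, threshold=100):
    """Sources whose hourly TXT query count exceeds the fleet-calibrated threshold."""
    txt_counts = Counter(src for src, qtype in queries if qtype == "TXT")
    return {src: n for src, n in txt_counts.items() if n > threshold}

queries = (
    [("10.1.2.3", "A")]   * 500   # normal browsing
    + [("10.1.2.3", "TXT")] * 20  # SPF/DKIM lookups, benign volume
    + [("10.9.8.7", "TXT")] * 450 # classic exfil-over-TXT volume
)
print(txt_outliers(queries))  # -> {'10.9.8.7': 450}
```

A legitimate workstation has little reason to issue hundreds of TXT queries an hour; mail servers do, so scope the rule accordingly.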
[Week of 2026-03-18] · Hartwell County Revenue Dept.
What it means for practitioners
Public S3 buckets in 2026 are not an excusable story. AWS has shipped Block Public Access at the account level since 2018 and turned it on by default in 2023 for new buckets. This exposure means Hartwell either inherited buckets from before the default flip and never audited, or someone explicitly overrode BPA for a "temporary" migration. Both are organizational failures, not technical ones.
What to do. First, in every AWS account you own, confirm that Block Public Access is enforced at the account level — not just the bucket level. Run aws s3control get-public-access-block --account-id ... across every account via Organizations and make the output a monthly auditor artifact. Second, for any bucket holding regulated data (tax, health, PCI), make S3 Object Lock with governance mode the default, not the exception; this would not have prevented the exposure but it would have prevented silent deletion of the audit trail. Third, the CloudTrail-based "no malicious access" claim depends entirely on whether CloudTrail S3 data events were enabled. They almost never are by default because the cost scares people. For sensitive buckets, turn them on and ship to a separate log archive account. Without data events, you are guessing.
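To make that CLI output an auditable artifact rather than a screenshot, evaluate it mechanically. A sketch of the compliance check applied to the JSON that get-public-access-block returns — the four flag names are the real AWS API fields; all four must be true for account-level BPA to count as enforced:

```python
REQUIRED = ("BlockPublicAcls", "IgnorePublicAcls",
            "BlockPublicPolicy", "RestrictPublicBuckets")

def bpa_enforced(api_response):
    """True only if every Block Public Access flag is explicitly enabled."""
    pab = api_response.get("PublicAccessBlockConfiguration", {})
    return all(pab.get(flag) is True for flag in REQUIRED)

good = {"PublicAccessBlockConfiguration": {f: True for f in REQUIRED}}
bad = {"PublicAccessBlockConfiguration": {
    "BlockPublicAcls": True, "IgnorePublicAcls": True,
    "BlockPublicPolicy": False, "RestrictPublicBuckets": True}}
print(bpa_enforced(good), bpa_enforced(bad))  # -> True False
```

Note the `is True` check: a missing key and an explicit false both fail, which is the behavior you want in an audit.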
The harder lesson for government IT: the researcher found this in a public index. So did everyone else who bothered to look. Assume that any S3 bucket that was ever public has been crawled, and plan your notification strategy accordingly. The 27-month exposure window means the breach happened in 2023; what you are experiencing now is the disclosure, not the compromise.
[Week of 2026-03-11] · Briarcliff Retail Group
What it means for practitioners
The retail pattern keeps repeating because marketing tech is the softest vendor tier any enterprise integrates with. The CMO signs a CDP contract, the data team wires a full customer export to the vendor's S3 bucket on a nightly job, security reviews the SOC 2 and moves on. Two years later the vendor has a breach and your 18M loyalty records are the headline. The CDP vendor was not the target — your data was.
Do these three things this quarter. First, pull every API key your marketing, analytics, and CDP vendors hold and confirm (a) scope is read-only and least-privilege, (b) rotation cadence is <= 90 days, (c) IP allowlisting is enforced server-side. The 18M number happened because one key with no rotation and no IP allowlist was valid for three years. Second, audit what you are actually sending to these vendors. In my experience, about 40% of the fields flowing to a typical CDP are not needed for the documented use case; you are shipping birth-year and last-4 because the ETL was copy-pasted, not designed. Minimize before you encrypt. Third, if your privacy/GRC function has not classified your marketing vendors as high-criticality third parties, escalate — they hold more PII than most of your actual business systems.
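The key audit in the first item is mechanical once you have a register of vendor keys. A hedged sketch, assuming you can export creation dates and allowlist status from your secrets manager or the vendor's key-management console — the register rows are invented for illustration:

```python
from datetime import date

MAX_AGE_DAYS = 90  # the rotation cadence from the checklist above

keys = [
    {"vendor": "cdp",       "created": date(2023, 2, 1),  "ip_allowlist": False},
    {"vendor": "analytics", "created": date(2026, 2, 20), "ip_allowlist": True},
]

def stale_or_open(register, today):
    """Vendors whose key is past rotation age or lacks server-side IP allowlisting."""
    return [
        k["vendor"] for k in register
        if (today - k["created"]).days > MAX_AGE_DAYS or not k["ip_allowlist"]
    ]

print(stale_or_open(keys, date(2026, 3, 11)))  # -> ['cdp']
```

A three-year-old key with no allowlist — the Briarcliff pattern — fails both conditions; either alone should be enough to open a ticket.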
On consumer notification: bcrypt with a cost factor >= 12 is holding up fine, so the password hashes are a low-velocity concern. The high-velocity concern is the "birth month + email + purchase history" combo, which is enough to drive credible phishing against the loyalty base for the next 18 months. Tell your customers to expect it.
[Week of 2026-02-25] · Nyxrate Exchange
What it means for practitioners
In 2026, if your high-value operators authenticate with TOTP, you are choosing to be on this list eventually. Adversary-in-the-middle phishing kits (Evilginx, Muraena, and the commercial variants sold in Telegram channels) defeat TOTP and push-notification MFA with a single click from the user. The only thing they do not defeat is origin-bound FIDO2 / WebAuthn, because the cryptographic challenge is tied to the actual domain, not a visual one.
If you operate anything with custody — crypto exchange, fintech treasury, payment processor, stablecoin issuer — your wallet operators, SRE leads, and anyone who can approve an outbound transfer should be on hardware security keys with phishing-resistant auth enforced at the Okta/Entra policy layer, not at the user's discretion. Conditional access policy: require authenticator strength = phishingResistant for this application group. That is a one-line change in Entra ID. In Okta it is the "FIDO2 (WebAuthn)" factor under Authenticator Enrollment with a sign-on policy that excludes all other factors for the privileged group. Do it this week.
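For the Entra route, the policy body pushed via Microsoft Graph looks roughly like the fragment below. This is a sketch, not a tested tenant config: the group and application IDs are placeholders, and the authentication-strength ID shown is my recollection of the built-in "Phishing-resistant MFA" strength — verify it against your tenant's authenticationStrengthPolicies list before deploying.

```json
{
  "displayName": "Require phishing-resistant MFA - custody operators",
  "state": "enabled",
  "conditions": {
    "users": { "includeGroups": ["<custody-operators-group-id>"] },
    "applications": { "includeApplications": ["<wallet-admin-app-id>"] }
  },
  "grantControls": {
    "operator": "OR",
    "authenticationStrength": { "id": "00000000-0000-0000-0000-000000000004" }
  }
}
```

Roll it out in report-only mode first so you can see which operators would be locked out before you flip it to enforced.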
Second lesson: the withdrawal-policy change was approved by a single operator. Any movement of custody funds should require M-of-N hardware approval with a time-delay on policy mutations — this is how cold-storage custodians work, and hot-wallet operations should import the same pattern. Fireblocks, Copper, BitGo all support it; in-house wallet software almost never does. Third: Nyxrate's SIEM did not alert on the withdrawal-volume anomaly because the threshold was calibrated to normal daily outflow. Calibrate for 90-minute windows, not 24-hour windows, or the drain finishes before the SOC pages.
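The windowing point is worth making concrete. A minimal sketch of a rolling 90-minute outflow check — event shapes and the limit are illustrative, and in production this would run as a streaming rule, not a batch scan:

```python
WINDOW_MIN = 90
WINDOW_LIMIT = 500_000  # e.g. well above the p99 of normal 90-minute outflow

def first_breach(withdrawals):
    """Minute at which the rolling 90-min outflow sum first exceeds the limit.

    withdrawals: sorted (minute_offset, amount) pairs.
    """
    for t, _ in withdrawals:
        window_sum = sum(a for ts, a in withdrawals if t - WINDOW_MIN < ts <= t)
        if window_sum > WINDOW_LIMIT:
            return t
    return None

normal = [(m, 4_000) for m in range(0, 600, 10)]                    # steady drip
drain = sorted(normal + [(610, 200_000), (640, 200_000), (670, 200_000)])
print(first_breach(drain))  # -> 670, one hour into the drain
```

The same drain against a 24-hour threshold calibrated to normal daily outflow would not page until the next morning, which is the Nyxrate failure in one line.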
[Week of 2026-02-18] · Kestrel Forge Manufacturing
What it means for practitioners
vCenter continues to be the most effective pivot point in enterprise networks because it holds the keys to every VM on the cluster and is rarely segmented from production admin networks. The January 2026 CVE had a public PoC within 10 days of disclosure. Kestrel had 40 days to patch and did not, because the OT organization owned the ESXi hosts running the plant-floor HMIs and the IT organization owned the vCenter, and nobody owned the cross-cutting patch decision.
If you run any manufacturing, utility, or logistics environment with OT under IT-owned virtualization, the ownership gap is the vulnerability. Concrete moves: first, publish a named accountable owner for every vCenter, Hyper-V, and Nutanix cluster — one human, not a team. Second, put vCenter management interfaces on a dedicated admin VLAN reachable only from a PAM jump host (CyberArk, Delinea, Teleport — pick one), and confirm that no plant-floor workstation can reach TCP 443 on any vCenter. Third, for OT specifically, run the IEC 62443 zone/conduit exercise even if you do not have to; the conversation forces the segmentation gaps into a diagram where the CFO can see them.
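The "no plant-floor workstation can reach TCP 443 on any vCenter" claim is verifiable from the OT side with a few lines. A sketch with placeholder hostnames — run it from a representative plant-floor machine and expect every result to be False; any True is a segmentation finding:

```python
import socket

def can_reach(host, port=443, timeout=2.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, filtered, timed out, or unresolvable
        return False

# Placeholder vCenter management hostnames; substitute your real inventory.
for vc in ["vcenter-plant1.example.internal", "vcenter-plant2.example.internal"]:
    print(vc, can_reach(vc))
```

This checks reachability, not policy intent: a False today does not prove a deny rule exists, so pair it with a firewall-rule review rather than treating it as the control itself.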
The $20M demand is a sideshow. The real cost for Kestrel will be the OEM contract penalties for missed JIT deliveries, which in automotive can exceed the ransom by an order of magnitude. If you supply a Tier 1, your contract almost certainly has clauses about production continuity that your security team has never read. Go read them. That is the number you are defending, not the ransom.
How I write these
Each breach entry is short by design — ~300 words, not a 5,000-word post-mortem. The target reader is a senior IT or security practitioner who will scan ten of these a month and remember the patterns. The goal is signal density over narrative.
Every entry includes: Victim · Industry · Records · Vector · Attribution · Takeaway. Where the primary source is an SEC 8-K, I link it. Where it's a vendor disclosure, I link that. Where there is no primary source and I'm reading attacker claims on leak sites, I flag it explicitly.