Now I have comprehensive information. Let me provide my legal analysis on these three distinct issues:
Question 1: CISA Advisory AA26-097A, Rockwell PLCs, and Mandatory Compliance
On the CISA advisory, James is correct — the "apply mitigations or discontinue use" language tied to the March 26, 2026 KEV deadline is unusually forceful. This puts CVE-2021-22681 into a special category under Binding Operational Directive 22-01, which creates de facto mandatory obligations for federal agencies. For critical infrastructure operators — particularly water and wastewater systems — the framework is more nuanced but equally serious.
Under CIRCIA, enacted in March 2022, covered entities in critical infrastructure sectors will be required to report covered cyber incidents to CISA within 72 hours once the final rule takes effect. The CIRCIA regulations remain at the proposed-rule stage, with sector-specific town halls ongoing as of March 2026; the final rule was expected by fall 2025 but has been delayed. While the reporting obligation isn't active yet, CISA is encouraging voluntary reporting now.
For water utilities specifically, there's an additional enforcement lever: the EPA. Under the Safe Drinking Water Act Section 1433, water systems serving more than 3,300 people must conduct risk assessments and maintain emergency response plans. EPA has been increasingly aggressive about cybersecurity enforcement — the March 2024 lawsuit against Kansas and Missouri public water systems demonstrates this posture. The AA26-097A advisory, with its six-agency signature (FBI, CISA, NSA, EPA, DOE, US Cyber Command), signals EPA coordination in enforcement.
NIS2 angle for EU operators: If this infrastructure operates in EU Member States, Article 21 requires "appropriate and proportionate technical, operational and organizational measures," and Article 23 mandates an "early warning" notification to the relevant CSIRT within 24 hours of becoming aware of a significant incident. Maximum administrative fine for essential entities: €10 million or 2% of global annual turnover, whichever is higher. The NIS2 transposition deadline was October 17, 2024, so these obligations are now live in EU national law.
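The fine ceiling works as a "greater of" calculation, which is easy to get backwards; a one-line sketch for essential entities (the function name is mine, and the figures are the Article 34 caps quoted above):

```python
def nis2_essential_fine_cap(global_annual_turnover_eur: float) -> float:
    """Greater of EUR 10 million or 2% of worldwide annual turnover
    (NIS2 administrative fine ceiling for essential entities)."""
    return max(10_000_000.0, 0.02 * global_annual_turnover_eur)
```

For a company with €2 billion in global turnover, the ceiling is €40 million, not €10 million; the flat figure only binds for smaller entities.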
The legal gray area: BOD 22-01 applies directly only to federal agencies. For private critical infrastructure, CISA's KEV catalog creates strong indirect pressure through insurance markets, regulatory examinations, and potential negligence litigation if an unpatched KEV vulnerability leads to damages.
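For operators tracking that indirect pressure, CISA publishes the KEV catalog as a machine-readable JSON feed, and checking whether a CVE is listed and past its remediation due date is mechanical. A minimal sketch, assuming the feed's public `cveID` and `dueDate` field names; the inline sample entry is illustrative, not a quote from the live catalog:

```python
import json
from datetime import date

# Illustrative sample mimicking the structure of CISA's KEV JSON feed;
# in practice you would fetch and parse the published catalog instead.
SAMPLE_FEED = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-2021-22681", "dueDate": "2026-03-26"}
  ]
}
""")

def kev_status(feed: dict, cve_id: str, today: date) -> str:
    """Return 'not listed', 'due <date>', or 'OVERDUE' for a given CVE."""
    for vuln in feed["vulnerabilities"]:
        if vuln["cveID"] == cve_id:
            due = date.fromisoformat(vuln["dueDate"])
            return "OVERDUE" if today > due else f"due {due.isoformat()}"
    return "not listed"
```

For example, `kev_status(SAMPLE_FEED, "CVE-2021-22681", date(2026, 4, 1))` returns `"OVERDUE"`; an insurer, examiner, or plaintiff's expert can run exactly this check.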
Question 2: Project Glasswing and Duty of Care Liability
This is fascinating uncharted territory. Anthropic's $100M credit commitment to 50+ Glasswing partners creates what I'd call asymmetric information liability risk.
The partners — including Amazon, Apple, Microsoft, JPMorganChase — have access to Claude Mythos Preview, which has reportedly identified "thousands of vulnerabilities" including a 27-year-old OpenBSD flaw and 16-year-old FFmpeg vulnerability. These partners are scanning both proprietary and open-source software.
The duty of care question hinges on what happens when a Glasswing partner discovers a vulnerability in a third-party product:
- If they disclose responsibly to the vendor with reasonable timeline — likely no liability
- If they sit on the information while the vendor remains exposed — potential negligence if harm materializes
- If the information surfaces later (the day-136 scenario, when non-partners learn they were the "vulnerable third party" all along) — untested territory
Under US tort law, duty of care generally requires a special relationship. Glasswing partnership agreements likely include liability shields and NDAs, but there's no clear precedent for AI-assisted vulnerability discovery creating statutory duties.
The negligence framework would examine: (a) existence of duty, (b) breach, (c) causation, (d) damages. The strongest claim might come from a non-partner vendor who can prove a Glasswing partner knew of a critical vulnerability, had means to prevent exploitation, and failed to act — particularly if the vulnerability was in open-source code the partner was actively using.
The EU jurisdiction is more threatening to Anthropic and its partners. Under the Product Liability Directive and NIS2's supply chain provisions, failure to report known vulnerabilities that affect service recipients could trigger regulatory action even without proven damages.
Practical recommendation: Glasswing partners should treat discovered third-party vulnerabilities under their responsible disclosure policies with clear timelines — typically 90 days, shorter for actively exploited vulnerabilities. Anthropic's model system card mentions "responsible disclosure" but doesn't specify timeframes. This gap creates liability exposure.
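The timeline policy recommended above can be encoded directly. A hedged sketch, assuming the common 90-day industry default and a 7-day window for actively exploited flaws (both figures are conventions, e.g. Google Project Zero's, not anything specified by Anthropic):

```python
from datetime import date, timedelta

DEFAULT_WINDOW_DAYS = 90   # common industry default for coordinated disclosure
EXPLOITED_WINDOW_DAYS = 7  # shortened window for actively exploited flaws

def disclosure_deadline(reported: date, actively_exploited: bool) -> date:
    """Date by which a discovered third-party vulnerability should be
    publicly disclosed if the vendor has not shipped a fix."""
    days = EXPLOITED_WINDOW_DAYS if actively_exploited else DEFAULT_WINDOW_DAYS
    return reported + timedelta(days=days)
```

Committing to explicit numbers like these in the partnership agreement, whatever the numbers are, is what closes the liability gap: the duty-of-care question becomes "did they follow their stated policy" rather than "what was reasonable."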
Question 3: Operation Masquerade and FBI Remote Patching Precedent
This is the most legally significant of the three. Operation Masquerade represents a novel expansion of government cyber operations authority with serious precedential implications.
The legal framework is Federal Rule of Criminal Procedure 41(b)(6), amended in 2016, which allows magistrate judges to authorize remote searches of electronic storage media where data is "concealed through technological means" — regardless of physical location. The DOJ press release confirms this was a "court-authorized technical operation."
Key legal elements:
- Rule 41 warrant — obtained ex parte, likely under seal initially
- "Technical disruption" — not traditional search/seizure but "commands sent to routers" to reset DNS settings and remove malicious resolvers
- Evidence collection — "collected evidence of activity on them by Russia's GRU"
- Reversibility — DOJ emphasized that "legitimate users can reverse changes through factory resets"
The Fourth Amendment analysis is unsettled. The 2016 Rule 41 amendments were controversial precisely because they enabled searches of devices in unknown locations. Here, the FBI did not merely search; it modified privately owned equipment without owner consent. The government's theory appears to be that: (a) the devices were already compromised, (b) the operation was "defensive," (c) no data was accessed or retained from innocent device owners, and (d) the operation was minimally intrusive.
Precedent for other nations: This operation creates a template that other nations could cite. China, Russia, Iran — all could claim similar "defensive" justifications for remote operations on foreign-owned devices. The key differentiator under international law would be (a) prior compromise by a threat actor, (b) court authorization, and (c) reversibility.
The international law framework is the Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, which discusses "hack backs" and active defenses. State practice is developing, but Operation Masquerade represents significant US state practice toward acceptance of limited, court-authorized "technical disruptions" on private infrastructure.
The precedent is limited but real: Future US operations will cite this as authorized. Other nations will claim comparable authority. And private entities may increasingly expect government "cleanup" operations — creating moral hazard if device owners feel less responsible for patching.