Alright, let me lay this out clearly based on the research.
(1) Does algif_aead module blacklisting at the HOST level protect all containers on that node?
Yes — Juliet Security's testing confirms that blacklisting algif_aead via a modprobe.d drop-in (install algif_aead /bin/false) and unloading the module with rmmod works at the host kernel level. Since the vulnerable AF_ALG AEAD interface depends on this module, no process on that node — containerized or not — can use it. CERT-EU confirms this workaround doesn't break dm-crypt/LUKS, kTLS, IPsec, OpenSSL, or SSH, so it's operationally safe. The module exists specifically to expose the kernel crypto API to userspace; most production workloads don't need it.
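A minimal sketch of the host-level blacklist (the filename is illustrative, and everything here requires root on the node):

```
# /etc/modprobe.d/disable-algif-aead.conf  (filename illustrative)
# Any attempt to load algif_aead runs /bin/false instead:
install algif_aead /bin/false

# One-time, per node, unload it if already resident:
#   rmmod algif_aead
# Verify:
#   lsmod | grep algif_aead    # should print nothing
```

The install directive survives reboots and kernel updates, which is why it's preferred over a plain blacklist line (blacklist only stops automatic loading, not explicit modprobe calls).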
(2) Specific seccomp profile changes to block AF_ALG, and cluster-wide deployment feasibility
The Juliet Security team tested this successfully: a custom Localhost seccomp profile denying socket(AF_ALG, ...) blocks the path. Address family 38 is AF_ALG. This worked on both the kind and GKE clusters they tested. The key question is whether it breaks legitimate workloads — and blocking AF_ALG is low-risk, since it's primarily used for kernel crypto acceleration. Standard workloads (web apps, databases, message queues) don't touch it.
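A minimal profile sketch in the OCI runtime seccomp format — defaultAction is left as allow purely for illustration; in practice you'd graft the socket rule onto your RuntimeDefault baseline. 38 is AF_ALG, and errnoRet 97 returns EAFNOSUPPORT so callers see "address family not supported" rather than a bare EPERM. One caveat: on 32-bit x86 the legacy socketcall() multiplexer can sidestep a rule that only matches socket, so include socketcall handling there if you still run such nodes.

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "architectures": ["SCMP_ARCH_X86_64", "SCMP_ARCH_AARCH64"],
  "syscalls": [
    {
      "names": ["socket"],
      "action": "SCMP_ACT_ERRNO",
      "errnoRet": 97,
      "args": [
        { "index": 0, "value": 38, "op": "SCMP_CMP_EQ" }
      ]
    }
  ]
}
```

For the audit pass, swap SCMP_ACT_ERRNO for SCMP_ACT_LOG and watch for seccomp audit events before enforcing.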
For cluster-wide deployment without breakage:
- Start with audit mode on a representative workload sample for 24 hours
- Look for seccomp violations on AF_ALG socket creation
- If clean, roll out enforce mode via Pod Security Admission or your admission controller
Critical gotcha: Kubernetes Pod Security Standards "Restricted" with RuntimeDefault seccomp does NOT block AF_ALG. You need a custom Localhost profile. This is the gap Alex flagged.
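Wiring a custom profile in is just the Pod-level securityContext. The localhostProfile path is relative to the kubelet's seccomp root (typically /var/lib/kubelet/seccomp), and the profile filename and image below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      # Illustrative path, resolved under the kubelet seccomp root:
      localhostProfile: profiles/deny-af-alg.json
  containers:
  - name: app
    image: registry.example.com/app:latest  # placeholder image
```

The profile file has to exist on every node the Pod can schedule onto, which is why cluster-wide rollout usually pairs this with a config-distribution mechanism or the Security Profiles Operator.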
(3) Managed K8s self-mitigation vs. cloud provider dependency
Here's where it gets messy. For EKS, AKS, GKE:
Seccomp: You can self-mitigate TODAY. Custom seccomp profiles are customer-configurable via Localhost profiles and Pod specs. No cloud provider kernel update required.
AppArmor: On GKE and AKS (Ubuntu nodes), AppArmor profiles can block the socket syscall with AF_ALG. EKS with Amazon Linux 2/2023 uses SELinux, not AppArmor, so you'd need SELinux policy mods — more complex but doable.
Kernel module blacklisting: On managed node groups you need host access to drop modprobe.d configs and run rmmod. EKS and GKE managed node groups make this painful — you'd need custom AMIs or a privileged DaemonSet. AKS gives you more SSH/node access, but it's still not trivial.
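Where you do control the nodes but not the image, the privileged-DaemonSet route looks roughly like this — names, namespace, and image are all illustrative, so treat it as a sketch rather than a vetted manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: blacklist-algif-aead   # illustrative name
  namespace: kube-system
spec:
  selector:
    matchLabels: {app: blacklist-algif-aead}
  template:
    metadata:
      labels: {app: blacklist-algif-aead}
    spec:
      hostPID: true             # needed so nsenter can reach the host mount ns
      containers:
      - name: apply
        image: alpine:3.20
        securityContext: {privileged: true}
        command:
        - /bin/sh
        - -c
        - |
          # Drop the blacklist onto the host and unload the module if loaded
          echo 'install algif_aead /bin/false' > /host/etc/modprobe.d/disable-algif-aead.conf
          nsenter -t 1 -m -- rmmod algif_aead 2>/dev/null || true
          sleep infinity
        volumeMounts:
        - {name: modprobe-d, mountPath: /host/etc/modprobe.d}
      volumes:
      - name: modprobe-d
        hostPath: {path: /etc/modprobe.d}
```

This is exactly the kind of privileged workload many managed-K8s security baselines forbid, which circles back to the dependency problem below.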
The honest answer: cloud provider kernel updates are the clean fix, but you're on their timeline. Self-mitigation via seccomp profiles is what you can actually ship in the next 48 hours. Module blacklisting requires privileged host access that many managed K8s customers don't have.
Priority Matrix:
- CRITICAL (do today): Deploy custom seccomp profiles blocking AF_ALG socket creation on clusters that can't patch kernels immediately
- HIGH (this week): Test module blacklisting on any nodes where you have host access; validate workload compatibility in staging for 24 hours before production
- MEDIUM (schedule): Push cloud providers on kernel update timelines; maintain seccomp profiles as defense-in-depth even post-patch
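Whichever route you take, verification is cheap. A small Python probe (a sketch — AF_ALG is hardcoded because socket.AF_ALG isn't exposed on every Python build) attempts to open an AF_ALG socket from inside a workload and reports whether the mitigation is live:

```python
import errno
import socket


def afalg_status() -> str:
    """Probe whether this node/container can create AF_ALG sockets.

    Returns "available" if socket creation succeeds (mitigation NOT in
    place) and "blocked" if seccomp or module blacklisting denies it.
    """
    AF_ALG = 38  # address family for the Linux kernel crypto API
    try:
        s = socket.socket(AF_ALG, socket.SOCK_SEQPACKET, 0)
    except OSError as e:
        # A seccomp ERRNO rule typically surfaces as EPERM or
        # EAFNOSUPPORT; a blacklisted/absent module shows up as
        # EAFNOSUPPORT as well.
        if e.errno in (errno.EPERM, errno.EAFNOSUPPORT, errno.EACCES):
            return "blocked"
        raise
    s.close()
    return "available"


if __name__ == "__main__":
    print(afalg_status())
```

Run it as a Job on each node pool before and after rollout: "available" before, "blocked" after is the result you want.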
Elena, question on the managed K8s angle — for GKE Autopilot or EKS Fargate where you have zero host access, are customers completely dependent on Google/AWS kernel updates, or is there a runtime security layer (GKE Security Posture, GuardDuty for EKS) that could block this at the syscall level?