OWASP formalizes a shared security baseline for GenAI apps with the Top 10 for LLM Applications (now part of the broader GenAI Security Project)

OWASP’s Top 10 for Large Language Model (LLM) Applications has been published as a community security baseline that catalogs common failure modes in GenAI applications—ranging from prompt injection to model theft. OWASP says the effort has expanded beyond a list into the OWASP GenAI Security Project, a broader open initiative covering risks across LLMs, agentic systems, and AI-driven applications, with a large global contributor community and separate project resources and participation tracks. Read more

Google’s SAIF reframed AI security as operational controls, not just model research

Google introduced the Secure AI Framework (SAIF) in June 2023 as a conceptual security framework for AI systems, explicitly mapping AI-specific threats (e.g., model theft, data poisoning, prompt injection, and training-data leakage) to familiar security disciplines such as secure-by-default infrastructure, detection and response, automation, consistent platform controls, continuous testing/feedback loops, and end-to-end risk assessment. While SAIF is not a standard, Google positioned it as a bridge between traditional security programs and emerging AI risks, and tied it to ongoing industry work including NIST’s AI Risk Management Framework and ISO/IEC 42001. Read more

NIST AI RMF: the U.S. government’s voluntary baseline for AI trust, security, and resilience—now expanding to generative AI and critical infrastructure

NIST’s AI Risk Management Framework (AI RMF) established a shared, voluntary vocabulary and process model for managing AI risks across the lifecycle—supporting “trustworthiness” goals such as safety, security, and resilience. Since the AI RMF 1.0 release on Jan. 26, 2023, NIST has expanded implementation support via the AI RMF Playbook and Resource Center, published a Generative AI Profile (NIST-AI-600-1) in July 2024, and, as of Apr. 7, 2026, issued a concept note for a forthcoming profile focused on Trustworthy AI in Critical Infrastructure—signaling growing expectations that AI governance and security controls will be tailored to high-consequence environments. Read more

CrowdStrike publishes RCA for July 2024 “Channel File 291” Windows sensor outage, reframing update resilience as a board-level risk

CrowdStrike released a root-cause analysis (RCA) and executive summary for the July 19, 2024 “Channel File 291” incident, in which a content configuration update delivered via channel files for its Windows sensor triggered a widespread outage. The company says the specific scenario is now incapable of recurring and outlines mitigations and process improvements intended to enhance resilience. CrowdStrike also reported that by July 29, 2024 at 8:00 p.m. EDT, approximately 99% of Windows sensors were back online, which it compares to a typical ~1% week-over-week variance in sensor connections. Read more

UNC5537’s Snowflake data-theft campaign made SaaS identity controls a first-order data platform risk

Mandiant (Google Cloud) reported a financially motivated cluster, UNC5537, systematically accessing Snowflake customer instances using stolen credentials—then stealing data and pursuing extortion and resale. Mandiant says it found no evidence the activity originated from a breach of Snowflake’s own enterprise environment; incidents it investigated traced back to compromised customer credentials, often sourced from historical infostealer infections dating to 2020. The campaign’s success, per Mandiant, was strongly associated with missing MFA, long-lived unrotated credentials, and lack of network allow lists—shifting the security conversation from “SaaS breach” to “identity hygiene as data-platform blast radius.” Read more

AWS frames “AI sovereignty” as control-and-choice across the AI stack, highlighting Nitro isolation, Bedrock data-use commitments, and sovereign deployment options

In a Security Blog post, AWS outlines how it approaches “AI sovereignty” as an extension of digital sovereignty, centered on data sovereignty (including residency and operator access restrictions) and operational sovereignty (including resilience and independence). AWS positions its sovereignty offering as “control and choice” across the AI stack—deployment location options (including on-premises and isolated deployments), model/service selection, and governance controls. The post highlights AWS Nitro System isolation properties for EC2 instances (including AI accelerator instances), a commitment that Amazon Bedrock customer inputs/outputs are not used to train Amazon Nova or third-party models, and references third-party validation of Nitro’s design by NCC Group. AWS also notes its ISO/IEC 42001 accredited certification coverage for certain AI services and a 2025 surveillance audit with no findings, framing these as assurance mechanisms for customers with sovereignty and compliance requirements. Read more