TWCERT warns of phishing campaigns abusing Microsoft 365, lookalike domains, and short-lived SSL certificates to evade defenses

Taiwan’s national CERT (TWCERT/CC) reports an active social-engineering campaign that combines legitimate Microsoft 365 email accounts, near-typosquat domains, and short-term SSL certificates to bypass email and web defenses. The activity includes two waves: (1) broad phishing emails themed as “Microsoft account abnormal sign-in activity” and (2) targeted spear-phishing that repeatedly sends “Microsoft one-time code” lures to create urgency before delivering an “abnormal sign-in” message. A notable tactic described by TWCERT is URL-pattern-based gating: victims who match attacker-defined URL rules see a customized phishing page that harvests credentials, while non-matching visitors are redirected to a legitimate login page—reducing detection and increasing credibility. Read more

Taiwan CERT flags “EtherHide” as an emerging blockchain-based C2 technique paired with ClearFake fake-update lures

Taiwan’s national CERT (TWCERT/CC) warns that attackers are increasingly using public blockchains as command-and-control (C2) infrastructure. The advisory highlights “EtherHide,” a technique first described by security researchers in October 2023, where adversaries store malicious commands or payload locations inside smart contracts. Malware (or malicious web scripts) can then query the chain for updated instructions, reducing the effectiveness of traditional controls like domain/IP blocking and traffic monitoring. TWCERT/CC also notes EtherHide is frequently chained with the “ClearFake” social-engineering pattern—fake system notifications or software update prompts—often delivered via compromised WordPress sites embedding malicious JavaScript. The combined flow uses Binance Smart Chain (BSC) smart contracts and read-only calls (e.g., eth_call) to retrieve attacker instructions without on-chain transaction fees, improving stealth and persistence. Read more

Contagious Interview evolves: attackers abuse VS Code Tasks to auto-run malware when a “trusted” workspace is opened

Taiwan’s TWCERT/CC reports a technical evolution in the “Contagious Interview” campaign: instead of relying on victims to manually execute a file, attackers embed a malicious VS Code workspace configuration so code runs automatically when developers open a project folder in Trusted Mode. The technique abuses VS Code’s tasks.json automation (including a run-on-folder-open behavior) and social engineering around Workspace Trust prompts. The activity primarily targets cryptocurrency software engineers and freelancers via recruiting outreach on LinkedIn and gig platforms, then directs them to download test projects from GitHub/GitLab. TWCERT/CC says the resulting payload has been identified as a newer BeaverTail variant (Type 701), with noted functional overlap with OtterCookie (sometimes referred to as “OtterCandy”), and is focused on stealing crypto-related browser extension and wallet data as well as high-value browser-stored secrets. Read more

Okta’s support-system intrusion highlights why HAR files and session tokens must be treated as privileged secrets

Okta’s root-cause report says a threat actor accessed files in its customer support case management system from Sept. 28 to Oct. 17, 2023, affecting 134 customers (under 1%). Some accessed files were HAR files containing session tokens, enabling session hijacking; Okta says tokens were used to hijack sessions for 5 customers. The incident stemmed from a support-system service account credential that was likely exposed after being saved to an employee’s personal Google account via Chrome sign-in on an Okta-managed laptop. Okta also disclosed a logging visibility gap that delayed identifying file downloads until an IP indicator was shared by BeyondTrust. Read more

Microsoft’s Secure Future Initiative: a multi-year, hyperscaler-scale reset on how Microsoft builds and operates security

Microsoft’s Secure Future Initiative (SFI), launched in November 2023, is a multi-year, cross-company program intended to “increasingly secure” how Microsoft designs, builds, tests, and operates its products and services. Microsoft says the first year prioritized security across the company through internal training and substantial engineering investment to reduce risk. SFI is structured around security principles (innovate, implement, guide) and six engineering pillars mapped to Zero Trust principles and the NIST Cybersecurity Framework, signaling a governance-and-engineering approach rather than a point-product response. For global cloud, identity, and security teams, SFI matters because it describes Microsoft’s internal hardening focus areas—identity and secrets, tenant isolation, network segmentation, SDLC/build integrity, unified detection, and faster remediation—that can influence default configurations, platform controls, and operational expectations across Microsoft’s cloud and software ecosystem over time. Microsoft also publishes periodic SFI progress reports (including references to a November 2025 report and earlier updates), indicating the initiative is intended to be measured and iterated in “waves” as threats evolve. Read more

OWASP formalizes a shared security baseline for GenAI apps with the Top 10 for LLM Applications (now part of the broader GenAI Security Project)

OWASP’s Top 10 for Large Language Model (LLM) Applications has been published as a community security baseline that catalogs common failure modes in GenAI applications—ranging from prompt injection to model theft. OWASP says the effort has expanded beyond a list into the OWASP GenAI Security Project, a broader open initiative covering risks across LLMs, agentic systems, and AI-driven applications, with a large global contributor community and separate project resources and participation tracks. Read more

Google’s SAIF reframed AI security as operational controls, not just model research

Google introduced the Secure AI Framework (SAIF) in June 2023 as a conceptual security framework for AI systems, explicitly mapping AI-specific threats (e.g., model theft, data poisoning, prompt injection, and training-data leakage) to familiar security disciplines such as secure-by-default infrastructure, detection and response, automation, consistent platform controls, continuous testing/feedback loops, and end-to-end risk assessment. While SAIF is not a standard, Google positioned it as a bridge between traditional security programs and emerging AI risks, and tied it to ongoing industry work including NIST’s AI Risk Management Framework and ISO/IEC 42001. Read more

NIST AI RMF: the U.S. government’s voluntary baseline for AI trust, security, and resilience—now expanding to generative AI and critical infrastructure

NIST’s AI Risk Management Framework (AI RMF) established a shared, voluntary vocabulary and process model for managing AI risks across the lifecycle—supporting “trustworthiness” goals such as safety, security, and resilience. Since the AI RMF 1.0 release on Jan. 26, 2023, NIST has expanded implementation support via the AI RMF Playbook and Resource Center, published a Generative AI Profile (NIST-AI-600-1) in July 2024, and, as of Apr. 7, 2026, issued a concept note for a forthcoming profile focused on Trustworthy AI in Critical Infrastructure—signaling growing expectations that AI governance and security controls will be tailored to high-consequence environments. Read more

CrowdStrike publishes RCA for July 2024 “Channel File 291” Windows sensor outage, reframing update resilience as a board-level risk

CrowdStrike released a root-cause analysis (RCA) and executive summary for the July 19, 2024 “Channel File 291” incident, in which a content configuration update delivered via channel files for its Windows sensor triggered a widespread outage. The company says the specific scenario is now incapable of recurring and outlines mitigations and process improvements intended to enhance resilience. CrowdStrike also reported that by July 29, 2024 at 8:00 p.m. EDT, approximately 99% of Windows sensors were back online, which it compares to a typical ~1% week-over-week variance in sensor connections. Read more

UNC5537’s Snowflake data-theft campaign made SaaS identity controls a first-order data platform risk

Mandiant (Google Cloud) reported a financially motivated cluster, UNC5537, systematically accessing Snowflake customer instances using stolen credentials—then stealing data and pursuing extortion and resale. Mandiant says it found no evidence the activity originated from a breach of Snowflake’s own enterprise environment; incidents it investigated traced back to compromised customer credentials, often sourced from historical infostealer infections dating to 2020. The campaign’s success, per Mandiant, was strongly associated with missing MFA, long-lived unrotated credentials, and lack of network allow lists—shifting the security conversation from “SaaS breach” to “identity hygiene as data-platform blast radius.” Read more