
The European Telecommunications Standards Institute (ETSI) has released a new technical specification, TS 104 223, titled Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems. This framework establishes an international benchmark for securing AI across its lifecycle, addressing unique threats like data poisoning and adversarial attacks1. The standard aligns with the UK’s AI Code of Practice but omits acknowledgment of the UK’s foundational role, sparking industry debate2.
Key Components of ETSI’s AI Security Standard
The TS 104 223 specification outlines 13 core principles, expanded into 72 trackable requirements, covering five lifecycle phases: secure design, development, deployment, maintenance, and end-of-life1. It targets developers, vendors, and operators, emphasizing mitigations for AI-specific threats such as model obfuscation and indirect prompt injection. ETSI’s approach mirrors its earlier IoT security standard (EN 303 645), applying similar principles like encryption and access control to AI systems1.
Controversy and Global Alignment
While ETSI positions the standard as a global benchmark, critics note its overlap with the UK’s February 2025 AI Code of Practice, which proposed identical lifecycle phases and principles2. The omission of the UK’s contribution raises questions about transparency in international standardization efforts. The framework also complements parallel initiatives like the EU AI Act and NIST’s AI Risk Management Framework, though enforcement mechanisms remain undefined1.
Implementation and SME Support
ETSI plans to release supplementary case studies to assist small and medium enterprises (SMEs) in adopting the standard. Scott Cadzow, ETSI SAI Chair, stated:
“Security must be a core requirement throughout the AI lifecycle, not an afterthought”1.
The lack of certification processes, however, may limit adoption among enterprises requiring compliance validation.
Relevance to Security Professionals
The standard’s lifecycle focus provides actionable guidance for securing AI deployments. Key takeaways include:
- Red Teams: Adversarial attack simulations should incorporate data poisoning and model evasion techniques outlined in the standard.
- Blue Teams: Monitoring for anomalous model behavior (e.g., unexpected output drift) aligns with the maintenance phase requirements.
- Threat Researchers: The framework’s threat taxonomy (e.g., indirect prompt injection) offers a structured approach to vulnerability discovery.
ETSI’s work reflects growing recognition of AI-specific risks, though its effectiveness will depend on adoption by regulators and tooling vendors. Future updates may address enforcement and interoperability with regional laws like the EU AI Act.
References
- ETSI Press Release: Technical Specification Sets International Benchmark for Securing Artificial Intelligence. Published 2025-04-22.
- Infosecurity Magazine: ETSI Baseline Requirements for AI Security. Published 2025-04-23.
- SecurityBrief UK: ETSI Sets Global Baseline for AI Cyber Security with New Standard.
- Infosecurity Magazine Twitter Thread on ETSI Standard Release.