ETSI ISG SAI (Security for Artificial Intelligence)
ETSI's Industry Specification Group on Securing Artificial Intelligence (ISG SAI) focuses on securing AI from both a usage and an adversarial perspective, aiming to build a standardized foundation for robust and secure AI deployments.
Definition and purpose
ISG SAI's main goal is to identify threats arising from the deployment of AI and to propose mitigation measures that address them. It aims to provide a detailed understanding of these threats across a range of use cases and to recommend security measures for reducing the associated risks.
The governing body for this initiative is the European Telecommunications Standards Institute (ETSI).
The latest publications from ETSI ISG SAI were released in 2023.
ETSI ISG SAI applies predominantly to industries and sectors deploying Artificial Intelligence, ranging from telecommunications to health, automotive, and financial services, among others.
Controls and requirements
The specific controls and requirements are detailed in the reports and specifications produced by ISG SAI, covering topics such as threat modeling for AI systems, securing AI training data, and ensuring the robustness of trained models.
Please refer to the official ETSI ISG SAI committee website for details.
Audit type, frequency, and duration
ETSI ISG SAI is not an "audit" framework but a group that produces guidelines and specifications. If auditing against its specifications becomes a requirement in the future, it would likely involve both internal and third-party assessments. Audit frequency and duration would similarly depend on the specific requirements detailed in those specifications and on the risks associated with the AI deployment in question.