This document sets out guidelines for providers of any system that uses artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorized parties.

The guidelines are broken down into four key areas within the AI system development life cycle. Each section contains suggestions and mitigations that will help reduce the overall risk to an organization's AI system development process.

1. Secure design. This section contains guidelines for the design stage of the life cycle. It covers understanding risks and threat modelling, as well as specific topics and trade-offs to consider in system and model design.

2. Secure development. This section contains guidelines that apply to the development stage of the life cycle, including supply chain security, documentation, and asset and technical debt management.

3. Secure deployment. This section contains guidelines that apply to the deployment stage of the life cycle, including protecting infrastructure and models from compromise, threat, or loss; developing incident management processes; and responsible release.

4. Secure operation and maintenance. This section contains guidelines for the secure operation and maintenance stage of the life cycle. It covers actions that are particularly relevant once a system has been deployed, including logging and monitoring, update management, and information sharing.