Canada signs global guidelines on secure AI development

Eighteen countries signed the guidelines developed by the U.K. and U.S.

Canada is among the more than a dozen countries that signed a document outlining guidelines for developing systems that use artificial intelligence (AI).

The guidelines focus on four areas of the development life cycle of an AI system: secure design, secure development, secure deployment, and secure operation and maintenance.

“AI systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realized, it must be developed, deployed and operated in a secure and responsible way,” the executive summary states.

The U.K. and U.S. developed the global guidelines, which follow a ‘secure by default’ approach.

Under design, the guidelines state that IT leaders should understand risks and conduct threat modelling. They should also consider any trade-offs in the design of the system and model.

For development, the guidelines focus on supply chain security and documentation, while the focus for deployment includes protecting infrastructure from compromise. Guidelines for secure operation and maintenance focus on actions once organizations have deployed AI systems, including monitoring and information sharing.

“The ultimate goal of this guide is to provide considerations and mitigations advice to help reduce the overall risk to an organizational AI system development process,” the Canadian Centre for Cyber Security said in a press release.

Canada has yet to implement its own regulation of AI systems. However, the federal government recently launched a voluntary code of conduct, following in the footsteps of its U.S. counterpart.

Source: Guidelines for secure AI system development, Cyber Centre
