Canada has taken a significant stride in the regulation of artificial intelligence (“AI”) with the introduction of the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (“AI Code”), following consultations with various stakeholders, including the Advisory Council on Artificial Intelligence. The AI Code provides best practices for industry as AI tools continue to be integrated into business operations in Canada and globally. It also underscores Canada’s active participation in the ongoing international dialogue regarding the responsible use and regulation of generative AI.

The AI Code introduces key guardrails for organizations developing and implementing AI systems:

Accountability: Organizations should establish a risk management framework that aligns with the scale and impact of their activities.

Safety: Organizations should assess the impact of their AI systems and take measures to mitigate safety risks, including addressing any potential for malicious or inappropriate uses.

Fairness and equity: Organizations should continuously evaluate and test their systems to identify and rectify biases throughout their development and use.

Transparency: Organizations should provide information about their AI systems and make it possible to identify AI-generated content.

Human oversight and monitoring: Organizations should monitor their AI systems and report and address any incidents.

Validity and robustness: Organizations should test their systems to ensure they work effectively and are secure against attacks.

The AI Code is a voluntary, interim measure pending the progress of the draft Artificial Intelligence and Data Act (“AIDA”) through the federal legislative process. Introduced in June 2022, AIDA is among the first legal frameworks for AI proposed globally. It forms part of a broader bill that also proposes significant reforms to federal privacy legislation, which has resulted in a relatively slow legislative process. Until AIDA is enacted, the AI Code will serve as a stopgap, marking a material step toward the responsible regulation of AI technology in Canada.

Advanced AI systems offer great potential across industries, but they present risks to privacy, security and human rights if not managed ethically. The voluntary AI Code proposes measures focused on accountability, safety, fairness, transparency, human oversight, and system validity to address these concerns, and is likely to serve as a roadmap for forthcoming legislation regulating AI in Canada.