Country: Canada

AI Governance on the Ground: Canada’s Algorithmic Impact Assessment Process and Algorithm Have Evolved

WPF’s “AI Governance on the Ground” series highlights and expands on topics and issues from WPF’s Risky Analysis report and its survey of AI tools. In this first publication of the series, we examine how Canadian government agencies are implementing AI governance and algorithmic transparency mechanisms, including the country’s employment and transportation agencies, its Department of Veterans Affairs, and the Royal Canadian Mounted Police, among others. These agencies have evaluated the automated systems they use under the country’s Algorithmic Impact Assessment (AIA) process, and the assessment results are public. The AIA has been required since Canada’s Directive on Automated Decision-Making went into effect in April 2019; its designers have since re-evaluated the framework, updating its criteria, requirements, and risk-level scoring algorithm. WPF interviewed government officials as well as key Canadian end-users of the assessments to capture the full spectrum of how the AIA is working at the ground level.