AI Governance on the Ground: How Canada's Algorithmic Impact Assessment Process and Algorithm Have Evolved
WPF’s AI Governance on the Ground Series:
This series highlights and expands on topics and issues from WPF's Risky Analysis report and survey of AI governance tools.
Download AI Governance on the Ground: Canada’s AIA Process (PDF, 6 pages)
How Canada's Algorithmic Impact Assessment Process and Algorithm Have Evolved
By Kate Kaye, Deputy Director
14 August 2024
Canadian government agencies, including the country's employment and transportation agencies, the Department of Veterans Affairs, and the Royal Canadian Mounted Police, have evaluated the automated systems they use according to the country's Algorithmic Impact Assessment process, or AIA. But Canada's AIA process itself has evolved. Designers of the assessment framework, which has been required since the country's Directive on Automated Decision-Making went into effect in April 2019, have evaluated and re-evaluated the AIA, updating its criteria, requirements, and risk-level scoring algorithm along the way.
The team at the Treasury Board of Canada Secretariat (TBS) overseeing its evolution calls Canada's AIA a work in progress. The AIA consists of a series of questions intended to determine risk and reduce the potential negative impacts of automated systems. Answers related to a system's design, algorithm, decision type, impact, and data all factor into a numerical score measuring the risk level of the system under evaluation.
AIAs have been adopted by governments around the world as tools for governing AI. Canada's AIA process is just one AI governance tool featured in World Privacy Forum's December 2023 report, Risky Analysis: Assessing and Improving AI Governance Tools, An international review of AI Governance Tools and suggestions for pathways forward. The report surveyed AI governance tools from around the world and offers suggestions for pathways toward improving them. Canada's AIA process has similarities to other AI governance tools reviewed in the Risky Analysis report from the Government of Dubai and Kwame Nkrumah University of Science and Technology in Ghana: all three rely on self-assessment to produce quantified ratings or scores intended to measure risk or aspects of AI systems such as fairness.
World Privacy Forum spoke in March with two key members of the AIA oversight team at the Treasury Board: Benoit Deshaies, Director of Responsible Data and Artificial Intelligence for the Office of the Chief Data Officer of Canada, and Dawn Hall, Advisor, Responsible Data and AI, Office of the Chief Information Officer. Both oversee various aspects of Canada's AIA implementation and design process updates.
Deshaies, who has a computer science background and worked in various Canadian government IT and machine learning roles before joining TBS, addresses the computer science and technical aspects of the assessments. Hall, who holds a PhD in biological sciences and has shifted her science and data analysis expertise into the data governance policy realm, focuses on policy.
Canadian agencies have published 22 AIAs evaluating automated systems they use or plan to use, and those AIA documents are publicly available in the country's open government data and information repository. Transport Canada, for instance, recently evaluated its Pre-load Air Cargo Targeting (PACT) Program, an automated approach to assessing the risk that inbound air shipments contain explosive devices or other threat items before loading and departure for Canada. The Department of Veterans Affairs assessed its system for Automation Development to Support Disability Benefit Decision Making.
Canada's AIA scoring algorithm is an important way to correlate the level of risk of a system with the stringency of requirements for its use, explained Hall. The scoring algorithm works by assigning points to questionnaire answers: the more points, the higher the risk level.
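How such a points-based score might work can be illustrated with a minimal sketch in Python. The questions, point values, and level floors below are hypothetical stand-ins of ours for illustration; they are not the actual items, weights, or thresholds in TBS's published AIA tool.

```python
# A minimal, hypothetical sketch of points-based impact scoring.
# The questions, point values, and thresholds are illustrative
# stand-ins, not the actual items or weights in Canada's AIA tool.

ANSWER_POINTS = {
    "decision_type": {"advisory_only": 1, "fully_automated": 4},
    "uses_personal_data": {"no": 0, "yes": 3},
    "reversibility": {"easily_reversible": 1, "irreversible": 4},
}

# Hypothetical score floors for each risk (impact) level.
LEVEL_FLOORS = [(10, "Level IV"), (7, "Level III"), (4, "Level II"), (0, "Level I")]

def score_system(answers):
    """Sum the points for each answer, then map the total to a level."""
    total = sum(ANSWER_POINTS[question][answer]
                for question, answer in answers.items())
    for floor, level in LEVEL_FLOORS:
        if total >= floor:
            return total, level

# A fully automated, irreversible system that uses personal data
# accumulates points and lands at the highest level.
print(score_system({
    "decision_type": "fully_automated",
    "uses_personal_data": "yes",
    "reversibility": "irreversible",
}))  # (11, 'Level IV')
```

The published tool is considerably richer than this sketch, but the core pattern is the same: riskier answers add points, and point totals map to discrete impact levels.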
The impulse to quantify the risks of AI systems, and improvements to them, carries its own potential problems. As discussed in detail in WPF's Risky Analysis report, ratings or scores produced using AI governance tools such as Algorithmic Impact Assessments can create additional risk as a result of errors or misinterpretation, especially if there is a lack of documentation and guidance for use of the tool. For example, attempting to de-bias AI systems by simplifying and decontextualizing complex legal, fairness-related concepts such as disparate impact has emerged as a flawed approach within the AI governance tool environment (see Use Cases in AI Fairness on page 25 of our report).
In the end, inappropriate use and interpretation of measurements can result in gaps between what people want AI governance tools to accomplish and what these tools actually accomplish.
Algorithmic Scoring Updates and New Privacy and Gender-Based Analysis
Canada's AIA scoring algorithm has been adjusted as new questions have been added. To determine the number of points assigned to a particular question, TBS weighs its significance relative to other questions. For instance, the use of personal data in a system increases the points attributed, potentially resulting in a higher overall risk level.
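One way a points-based algorithm can absorb new or re-weighted questions without redefining its levels is to normalize the raw score against the maximum attainable score. The sketch below illustrates that design under our own assumptions; the 25/50/75 percent cut-offs and point totals are illustrative, not TBS's published thresholds.

```python
# Hypothetical sketch: normalizing the raw score against the maximum
# attainable score keeps level thresholds stable as questions are
# added or re-weighted. The 25/50/75 percent cut-offs are our
# illustrative assumptions, not TBS's published thresholds.

def impact_level(raw_score, max_score):
    """Map a score, as a fraction of the maximum attainable, to a level."""
    fraction = raw_score / max_score
    if fraction <= 0.25:
        return "Level I"
    if fraction <= 0.50:
        return "Level II"
    if fraction <= 0.75:
        return "Level III"
    return "Level IV"

# Suppose a new personal-data question worth up to 15 points is added.
print(impact_level(30, 60))  # before: 50% of maximum -> Level II
print(impact_level(30, 75))  # after, answering "no": 40% -> Level II
print(impact_level(45, 75))  # after, answering "yes": 60% -> Level III
```

Under a design like this, a heavily weighted question raises both a system's potential score and the overall ceiling, so a system's risk level shifts only if its answers actually trigger the new points.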
In addition to adjusting the AIA scoring algorithm over the years, the TBS team has added questions to assess the role of personally identifiable information in systems under evaluation and to address issues related to de-identified data.
An AIA update in 2023 added another layer of assessment, requiring agencies to evaluate the impacts of algorithmic systems on particular populations, including gender- and age-related considerations. The evaluation is based on Canada's Gender-Based Analysis Plus, an intersectional analysis that, in addition to considering biological sex and gender, considers factors such as age, disability, education, ethnicity, economic status, geography, language, race, religion, and sexual orientation to understand impact.
The caliber of Canada's impact assessments has improved in the five years since the AIA process began, said Deshaies, though he acknowledged that not all of Canada's AIAs are created equal. Agencies are not required to work with his team, but Deshaies suggested that when they do, the result tends to be a more robust assessment.
Engagement takes many forms, from monthly virtual meetings where the team answers agency questions and provides guidance and feedback on their AIAs, to presentations by Deshaies about AIAs, AI technology issues, and related policy. TBS also partnered with Canada's School of Public Service to create a course for public servants about using generative AI for government purposes, a complement to a guide on the same topic.
One key to crafting better AIAs? A multidisciplinary approach, said Deshaies and Hall. They suggest that AIA quality improves when agencies include staff from multiple practice areas and disciplines, bringing legal and sector-specific experts together with IT and computer science experts. Earlier AIAs from Canadian agencies were often conducted under the purview of IT departments, but Deshaies suggested that computer science and tech experts without backgrounds in social and human rights impacts or legal concepts may be ill-equipped to complete assessments.
A more multidisciplinary approach has helped round out and improve AIAs from Canada’s immigration agency — Immigration, Refugees and Citizenship Canada (IRCC) — said Hall. That matters: nearly half of Canada’s published AIAs evaluate automated systems used by IRCC.
The Real-World Impact of Impact Assessments
Ultimately, Algorithmic Impact Assessments should be meaningful governance tools, not only evaluating risks and helping spotlight ways to improve algorithmic systems, but also creating genuine transparency and accountability around their use.
For Canadian immigration and refugee lawyer William Tao, the assessments from IRCC are the primary public source of information about the automated and algorithmic systems that determine crucial elements of his immigrant and refugee clients' lives.
In fact, Tao, founder of Heron Law Offices in Burnaby, British Columbia, suggested in a March discussion with World Privacy Forum that the automated systems used by IRCC are creating profound shifts in the day-to-day work of lawyers defending immigrants and refugees in Canada.
Though he’s been critical of the AIA process, Tao said he was surprised to see that an assessment of an automated triage tool created by IRCC to assist in processing applications for Canada’s international youth work program included additional revealing documentation. That Gender-Based Analysis Plus document, showing how the tool was measured according to gender- and age-related criteria, was not something Tao had ever expected to see. In fact, in a social media post about it, Tao called publication of the GBA Plus report “unheard of in terms of public disclosure.”
Tao and others in the immigration law community, including his colleague Mario D. Bellissimo, a citizenship and immigration lawyer and founder of Bellissimo Law Group, have spent more and more time investigating IRCC's use of automated and AI-based decision support tools, how those tools are built and used, and their impacts on immigrants and refugees. To varying degrees, the AIAs offer a glimpse into the inner workings of the automated systems that affect the lives of their clients, sometimes influencing decisions about immigration case risk, whether immigrants or refugees can legally work, and even whether people must separate from their spouses or children.
In the past, legal watchdogs including Tao had requested access to GBA Plus reports associated with immigration-related AIAs with little luck. The rare publication of the GBA Plus report helped them discover that information such as travel history, medical requests, and country of origin affects the way applicants are categorized.
The use of these systems in Canada's corner of immigration procedures offers a tangible example of how AI is seeping into the very pipes of governance and policy. The scenario shows just how important the design and approach of AI governance tools like Algorithmic Impact Assessments are, and will be, for years to come.
As for Tao, he said the new level of algorithmic transparency from IRCC was a breath of fresh air. Going forward, he said he hopes the agency will open up even more.
World Privacy Forum’s December 2023 AI Governance Tools Report
Risky Analysis: Assessing and Improving AI Governance Tools, An international review of AI Governance Tools and suggestions for pathways forward. https://www.worldprivacyforum.org/2023/12/new-report-risky-analysis-assessing-and-improving-ai-governance-tools/.
Canada’s entry in WPF’s AI Governance Tools Survey begins on p. 66 of the report.
References
Government of Canada, Department of Employment and Social Development, Algorithmic Impact Assessment of a Machine Learning Model to Triage Reduction of Older Claim Recalculations. https://open.canada.ca/data/en/info/24d2cab2-6a0d-4234-9239-b6ce102ebabd/resource/437a8cc2-da3a-4ed7-abb1-d45dfd7af0e3.
Government of Canada, Department of Veterans Affairs, Algorithmic Impact Assessment Results — Automation Development to Support Disability Benefit Decision Making. https://open.canada.ca/data/en/info/aafdfbcd-1cdb-4913-84d5-a03df727680c/resource/d81de44b-1c77-48b9-9b40-d4ea452bc610.
Government of Canada, Policies, directives, standards, and guidelines, Directive on Automated Decision-Making. https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592.
Government of Canada, Women and Gender Equality Canada, Gender-Based Analysis Plus. https://www.canada.ca/en/women-gender-equality/gender-based-analysis-plus.html.
Government of Canada, Guide on the Use of Generative Artificial Intelligence. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-use-generative-ai.html.
Government of Canada, Open Data Portal, International experience Canada work permit eligibility model. Datasets referenced: Immigration, Refugees and Citizenship Canada’s Algorithmic Impact Assessment; Gender-based Analysis Plus Model. Datasets available at: https://open.canada.ca/data/en/dataset/b4a417f7-5040-4328-9863-bb8bbb8568c3.
Government of Canada, Learning Catalogue, Using Generative AI in the Government of Canada (DDN321) Virtual Course (English and French available). https://catalogue.csps-efpc.gc.ca/product?catalog=DDN321&cm_locale=en.
Government of Canada, Open Government Data and Information Repository. https://search.open.canada.ca/opendata/?collection=aia&page=1&sort=date_modified+desc.
Government of Canada, Royal Canadian Mounted Police, Algorithmic Impact Assessment for its Griffeye Tool designed to assist in the categorization and classification of child sexual exploitation images and videos. https://open.canada.ca/data/en/dataset/89898244-aaae-4591-ba9b-fe5cd81d5924.
Government of Canada, Transport Canada, Pre-load Air Cargo Targeting (PACT) Program Algorithmic Impact Assessment. https://open.canada.ca/data/en/dataset/c088f841-2d79-4c7e-9281-cc65cbae1b06.
Government of Dubai, Digital Dubai's AI System Ethics Self-Assessment Tool. https://www.digitaldubai.ae/self-assessment.
Kwame Nkrumah University of Science and Technology, Responsible Artificial Intelligence Lab Ghana, FACETS Responsible AI Framework. https://facets.netlify.app/facets#envision.
Mario D. Bellissimo, LL.B., C.S., Techno Centric-Decision-Making in Canadian Immigration Law and Practice: Artificial Intelligence Deployment; How Can the Existing Canadian Immigration Legal Eco-System and Immigration Advocates Respond to the Use of AI Technologies? http://products.thomsonreuters.ca/lawreportedigest/pdfs/99immlr4th47.pdf.
Publication information:
Author: Kate Kaye, Deputy Director WPF
Editing: Pam Dixon, Executive Director WPF
Original publication date: 14 August 2024
URL: https://www.worldprivacyforum.org/2024/08/ai-governance-on-the-ground-series-canada/