AI Governance Tools

AI Governance on the Ground: Canada’s Algorithmic Impact Assessment Process and Algorithm has evolved

WPF’s “AI Governance on the Ground Series” highlights and expands on topics and issues from WPF’s Risky Analysis report and its survey of AI tools. In this first publication of the series, we highlight how the Canadian government is implementing AI governance and algorithmic transparency mechanisms across agencies including its employment and transportation agencies, its Department of Veterans Affairs, and the Royal Canadian Mounted Police, among others. These agencies have evaluated the automated systems they use according to the country’s Algorithmic Impact Assessment process, or AIA, and the assessment results are public. Designers of this assessment framework, which has been required since the country’s Directive on Automated Decision-Making went into effect in April 2019, have now re-evaluated the AIA, updating its criteria, requirements, and risk-level scoring algorithm along the way. WPF interviewed government officials as well as key Canadian end-users of the assessments to capture the full spectrum of how the AIA is working at the ground level.

Deputy Director Kate Kaye attending ACM FAccT conference in Rio de Janeiro, Brazil

Deputy Director Kate Kaye is in Rio de Janeiro, Brazil from 3-6 June to attend ACM’s Conference on Fairness, Accountability, and Transparency (ACM FAccT), the leading conference on trustworthy AI in socio-technical systems. While at the conference, Kaye will be interviewing paper authors and leading AI experts for forthcoming WPF podcasts and to inform additional work.

WPF advises NIST regarding synthetic content and data governance

WPF filed comments with the US National Institute of Standards and Technology (NIST) on its draft governance plan for synthetic content. WPF’s comments focused on seven recommendations ranging from technical to policy issues. One overarching recommendation was that NIST ensure human rights are attended to in all of its plans. Additional recommendations include that NIST attend to the risks of digital exhaust in metadata and ensure that biometric data is included in the guidance.

WPF announces participation in the National Institute of Standards and Technology (NIST) AI Safety Institute Consortium (AISIC)

The World Privacy Forum is pleased to announce that it has joined more than 200 of the nation’s leading artificial intelligence (AI) stakeholders to participate in a Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. Established by the Department of Commerce’s National Institute of Standards and Technology (NIST) in February 2024, the U.S. AI Safety Institute Consortium (AISIC) brings together AI creators and users, academics, government and industry researchers, and civil society organizations to meet this mission.

WPF Comments to OMB regarding AI and Privacy Impact Assessments

The World Privacy Forum has filed detailed comments with the U.S. Office of Management and Budget (OMB) in response to its Request for Information on Privacy Impact Assessments. Specifically, OMB requested information about how the U.S. Federal government should update or adjust its requirements for Privacy Impact Assessments (PIAs) in light of changes to data ecosystems brought about by Artificial Intelligence (AI). WPF provided substantive recommendations regarding administrative provisions of the Privacy Act, scalable automated AI governance tools for privacy and trustworthy AI, ensuring nimble processes for privacy and AI assessments, and ensuring balanced, skillful socio-legal-technical decision-making.