AI

WPF suggests solutions to OMB for handling Commercially Available Information, including exploring a formal, inclusive Voluntary Consensus Standards process to address challenges

WPF submitted comments regarding how commercially available information (CAI) — also known as data broker data — will be handled by U.S. Executive Agencies. The Request for Information from OMB was an important opportunity to comment on a topic that has only rarely been opened for public comment.

Remarks of Pam Dixon at the First Digital Trust Convention held at OECD in Paris; WPF co-sponsor

The first Digital Trust Convention was held in Paris at OECD Headquarters on 15 November 2024. The event addressed the problem of establishing trust in people and information in digital spaces, including the challenges created by synthetic content generated or influenced by AI. WPF co-sponsored the event, and Executive Director Pam Dixon was in Paris to participate in person. Her remarks focused on three themes: that solutions must do no harm, caution around inappropriate uses of digital ID, and respect for socio-technical contexts.

WPF Deputy Director to present AI governance tools research on content authenticity measurement in keynote speech

WPF Deputy Director Kate Kaye will present her ongoing research regarding AI governance on November 12, 2024, in a keynote talk, “Deep fakes, AI, and the Era of Content Authenticity,” at the CIMM West event in Los Angeles, a gathering of around 200 media and advertising industry data measurement and analytics professionals.

AI Governance on the Ground: Chile’s Social Security and Medical Insurance Agency Grapples with Balancing New Responsible AI Criteria and Vendor Cost

The minute decisions, measurements, and methods embedded inside the tools used to govern AI systems directly affect whether policy implementations actually align with policy goals. The Chilean government’s experience using its AI bidding template, and questions inside the agency about how to weigh traditional tech procurement criteria such as vendor cost against newer responsible AI criteria like discriminatory impacts, offer a glimpse of the AI governance challenges happening on the ground today. The tensions the Chilean government is navigating may be a sign of what other organizations around the world could encounter as they put their own responsible AI policies into practice and grapple with the policy implications of AI-facilitated decision-making.

AI Governance on the Ground: Canada’s Algorithmic Impact Assessment Process and Algorithm Have Evolved

WPF’s “AI Governance on the Ground Series” highlights and expands on topics and issues from WPF’s Risky Analysis report and its survey of AI tools. In this first publication of the series, we highlight how the Canadian government is implementing AI governance and algorithmic transparency mechanisms across various agencies, including its employment and transportation agencies, its Department of Veterans Affairs, and the Royal Canadian Mounted Police, among others. These agencies have evaluated the automated systems they use according to the country’s Algorithmic Impact Assessment process, or AIA, and the assessment results are public. Designers of this assessment framework — required since the country’s Directive on Automated Decision-Making went into effect in April 2019 — have now re-evaluated the AIA, updating its criteria, requirements, and risk-level scoring algorithm along the way. WPF interviewed government officials as well as key Canadian end-users of the assessments to capture the full spectrum of how the AIA is working at the ground level.