AI

WPF provides comments on U.S. AI Action Plan; urges support for NIST AISIC and advancing trustworthy metrology of AI governance tools

WPF provided comments to the U.S. National Science Foundation and the Office of Science and Technology Policy regarding their priorities for the U.S. AI Action Plan. WPF's comments focused on four key points, including the importance of supporting the NIST AI Safety Institute Consortium and of building a verifiable, repeatable evaluative environment for testing and measuring AI governance tools, in order to foster trustworthy AI systems and ecosystems, inclusive of privacy.

Deputy Director Kate Kaye leading roundtable discussion at GWU conference on ethical frameworks and guidelines for AI

WPF Deputy Director Kate Kaye will facilitate a roundtable discussion among academic scholars, industry representatives, and others addressing concerns and considerations related to synthetic content and the use of synthetic content governance tools. Kate will help guide the discussion during the Organizational Applications for Identifying and Tagging Synthetic Content roundtable. In ...

WPF Executive Director Pam Dixon to give talk about Modern Privacy in an AI Era live with Washington State Office of Privacy and Data Protection

WPF Executive Director Pam Dixon will be giving a rare, live, one-hour Q&A session with the Washington State Office of Privacy and Data Protection (OPDP), which was created by the state legislature in 2016. This will take place in celebration of International Privacy Week, 30 January 2025. ...

WPF suggests solutions to OMB for handling Commercially Available Information, including exploring a formal, inclusive Voluntary Consensus Standards process to address challenges

WPF submitted comments regarding how commercially available information (CAI) — also known as data broker data — will be handled by U.S. Executive Agencies. The Request for Information from OMB was an important opportunity to comment on a topic that has only rarely been opened for public comment. OMB Request ...

Remarks of Pam Dixon at the First Digital Trust Convention held at OECD in Paris; WPF co-sponsor

The first Digital Trust Convention was held at OECD Headquarters in Paris on 15 November 2024. The event addressed the problem of establishing trust in people and information in digital spaces, including the challenges created by synthetic content generated or impacted by AI. WPF co-sponsored the event, and Executive Director Pam Dixon was in Paris to participate in person. Her remarks focused on the need for solutions that do no harm, caution around inappropriate uses of digital ID, and respect for socio-technical contexts.

AI Governance on the Ground: Chile’s Social Security and Medical Insurance Agency Grapples with Balancing New Responsible AI Criteria and Vendor Cost

The minute decisions, measurements, and methods embedded inside the tools used to govern AI systems directly affect whether policy implementations actually align with policy goals. The Chilean government’s experience using its AI bidding template, and questions inside the agency about how to weigh traditional tech procurement criteria, such as vendor cost, against newer responsible AI criteria, such as discriminatory impacts, give a glimpse of the AI governance challenges happening on the ground today. The tensions the Chilean government is dealing with may be a sign of what other organizations around the world could encounter as they put their own responsible AI policies into practice and navigate the policy implications of AI-facilitated decision making.

AI Governance on the Ground: Canada's Algorithmic Impact Assessment Process and Algorithm Have Evolved

WPF’s “AI Governance on the Ground” series highlights and expands on topics and issues from WPF’s Risky Analysis report and its survey of AI tools. In this first publication of the series, we highlight how the Canadian government is implementing AI governance and algorithmic transparency mechanisms across various agencies, including its employment and transportation agencies, its Department of Veterans Affairs, and the Royal Canadian Mounted Police, among others. The agencies have evaluated the automated systems they use according to the country’s Algorithmic Impact Assessment process, or AIA, and the assessment results are public. Designers of this assessment framework, which has been required since the country’s Directive on Automated Decision-Making went into effect in April 2019, have now re-evaluated the AIA, updating its criteria, requirements, and risk-level scoring algorithm along the way. WPF interviewed government officials as well as key Canadian end-users of the assessments to capture the full spectrum of how the AIA is working at the ground level.

Deputy Director Kate Kaye attending ACM FAccT conference in Rio de Janeiro, Brazil

Deputy Director Kate Kaye is in Rio de Janeiro, Brazil, from 3-6 June to attend ACM's Fairness, Accountability, and Transparency conference (ACM FAccT), the leading conference on artificial intelligence and trustworthy AI in socio-technical systems. While at the conference, Kaye will interview paper authors and leading AI experts for forthcoming WPF podcasts and to inform additional work.

WPF advises NIST regarding synthetic content and data governance

WPF filed comments with the US National Institute of Standards and Technology regarding its draft governance plan for synthetic content. WPF's comments focused on 7 recommendations ranging from technical to policy issues. One overarching recommendation was that NIST ensure that human rights are attended to in all of its plans. Additional recommendations included asking NIST to attend to the risks of digital exhaust in metadata and to ensure that biometric data is included in the guidance, among others.

WPF announces participation in the National Institute of Standards and Technology (NIST) AI Safety Institute Consortium (AISIC)

The World Privacy Forum is pleased to announce that it has joined more than 200 of the nation’s leading artificial intelligence (AI) stakeholders to participate in a Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. Established by the Department of Commerce’s National Institute of Standards and Technology (NIST) in February 2024, the U.S. AI Safety Institute Consortium (AISIC) brings together AI creators and users, academics, government and industry researchers, and civil society organizations to meet this mission.

WPF to speak before the State House of Mongolia for its National Consultation on e-Health, and before the Human Rights Commission of Mongolia

5 April 2024, Paris, France — World Privacy Forum Executive Director Pam Dixon has been invited to speak at the State House of Mongolia for the Government of Mongolia’s National Consultation on e-Health. She will speak twice at this event: first on the topic of Artificial Intelligence in Healthcare, and second on Big Data in e-Health. Later in the week she will present on AI Governance and Privacy before the Ministry of Digital Development and Communications, and on AI Governance Tools before the National Human Rights Commission of Mongolia. All speeches will take place in Ulaanbaatar, Mongolia.

Initial Analysis of the new U.S. governance for Federal Agency use of Artificial Intelligence, including biometrics

Today the Biden-Harris Administration published a Memorandum that sets forth how U.S. Federal Agencies and Executive Departments will govern their use of Artificial Intelligence. The OMB memorandum provides an extensive and in some ways surprising articulation of emergent guardrails around modern AI. There are many points of interest to discuss, but the most striking is the thread of biometrics systems guidance that runs throughout the memorandum and continues in the White House Fact Sheet and associated materials. Additionally, the articulation of minimum practices for safety-impacting and rights-impacting AI will likely become an important touch point in regulatory discussions in the U.S. and elsewhere. The guidance represents a significant policy shift for the U.S. Federal government, particularly around biometrics.

WPF comments to OMB regarding its Draft Memorandum on establishing new Federal Agency requirements for uses of AI

In December 2023, WPF submitted detailed comments to the U.S. Office of Management and Budget regarding its Request for Comments on the draft memorandum Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. OMB published the request in the Federal Register on November 3, 2023. This particular Memorandum is of historic importance, as it articulates the establishment of new agency requirements in the areas of AI governance, innovation, and risk management, and would direct agencies to adopt specific minimum risk management practices for uses of AI that impact the rights and safety of the public.

Report: Risky Analysis: Assessing and Improving AI Governance Tools

We are pleased to announce the publication of a new WPF report, “Risky Analysis: Assessing and Improving AI Governance Tools.” This report sets out a definition of AI governance tools, documents why and how these tools are critically important for trustworthy AI, and maps where these tools are in use around the world. The report also documents problems in some AI governance tools themselves, and suggests pathways to improve these tools and to create an evaluative environment for measuring their effectiveness. AI systems should not be deployed without simultaneously evaluating their potential adverse impacts and mitigating their risks, and most of the world agrees on the need to take precautions against the threats posed. The specific tools and techniques that exist to evaluate and measure AI systems for inclusiveness, fairness, explainability, privacy, safety, and other trustworthiness issues (collectively called AI governance tools in the report) can help address these issues. While some AI governance tools provide reassurance to the public and to regulators, the tools too often lack meaningful oversight and quality assessments. Incomplete or ineffective AI governance tools can create a false sense of confidence, cause unintended problems, and generally undermine the promise of AI systems. The report contains rich background details, use cases, potential solutions to the problems discussed, and a global index of AI governance tools.

Half-day tutorial on AI Governance, Data Protection, and Privacy: Advanced problem-solving for Computer Vision and More

WPF has organized a robust and interactive tutorial on advanced AI governance and privacy for Computer Vision systems (and beyond), to be held at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). WACV is the premier international computer vision event, comprising a main conference and several co-located workshops and tutorials. What makes this AI governance and data protection tutorial compelling? The eight speakers are working at the top of their respective fields, with presentations that combine to make a muscular, socio-technical dive into today’s most pressing issues around AI technology, governance, privacy, and policy structures. The tutorial is arranged in a logical flow that moves participants through the technical and policy aspects of advanced systems development and governance, including technical, legal, ethical, and privacy analysis, as well as emerging norms and additional considerations to be aware of. The tutorial will include ample time for analysis and discussion, and will be participatory.

WPF's contribution to ID4Africa Workshop on Privacy and Data Protection in ID Systems, Nairobi, Kenya 2023

The World Privacy Forum is pleased to provide a summary of Executive Director Pam Dixon's work in Nairobi, Kenya at the ID4Africa AGM. Dixon served as the Senior Special Rapporteur for two workshops at the 2023 ID4Africa Annual General Meeting in Nairobi, Kenya, 23-25 May. In June, ID4Africa hosted a live ...

Emerging Technologies, Human Subject Research, and the Common Rule: High level overview of the 2023 OHRP Research Community Forum

Earlier this month, WPF attended a joint conference focused on how the Common Rule that governs human subject research in the US will be interpreted amid new technological shifts such as AI. The Department of Health and Human Services is seeking to define the next steps and new policy frameworks needed to ensure the Common Rule protects individuals in current and future research environments. Details on the presentations, conversations, and key takeaways are in the post.

NIST releases milestone AI Risk Management Framework to foster trustworthy AI ecosystems

This week has been an important one for U.S. policy regarding rights-preserving artificial intelligence and how to manage, define, and improve AI in practical implementations. There are two significant news items. First, the National Institute of Standards and Technology (NIST) has released its milestone AI Risk Management Framework (1.0) for ...

WPF advises Secretary's Advisory Committee on Human Research Protections regarding its proposed AI Framework

WPF recently reviewed and provided recommendations regarding a proposed AI Framework meant to apply to medical research involving human subjects. The issue of human subject research is a critically important one. In the US, the Common Rule (45 CFR part 46, Subpart A) is a key regulation that protects people from unethical medical research. As research utilizing tools such as AI and SaMD (software as a medical device) grows in use, there is an urgent need to determine the proper ethical, legal, and regulatory framework for the use of these tools in the human subject research context. For this reason, WPF was pleased to review and provide recommendations to the Secretary's Advisory Committee on Human Research Protections (SACHRP) on its proposed AI Framework.

ISPI Forum on Digital Transformation, WPF speaker 

The Italian Institute for International Political Studies (ISPI) will be holding a High Level Forum on Digital Transformation in connection with the OECD. The event will be held 16 May 2022 in a hybrid format. WPF will be speaking about what risks exist for consumers arising from illicit use of their personal ...
