NIST releases milestone AI Risk Management Framework to foster trustworthy AI ecosystems
This week has been an important one for U.S. policy on rights-preserving artificial intelligence and on how to manage, define, and improve AI in practical implementations. There are two significant news items.
First, the National Institute of Standards and Technology (NIST) has released its milestone AI Risk Management Framework (AI RMF 1.0) for voluntary use. The AI RMF is robust and took several years to complete; it was developed with national and international input and reflects hundreds of formal stakeholder comments. The Framework is an important AI landmark for the U.S. It will help harmonize baseline definitions, workflows, and other aspects of AI approaches within the U.S. and across jurisdictions, and it will improve the ability of AI actors to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
The NIST AI RMF is a “living document” and will be updated on an ongoing basis, as will the AI RMF Playbook, a companion document that suggests ways to navigate and use the Framework. NIST is accepting comments on the Playbook at AIframework@nist.gov through 27 February 2023, and going forward it will integrate comments on a semi-annual basis.
In addition to gathering U.S. stakeholder input through its multistakeholder process, NIST has been working cooperatively with the OECD’s AI Working Party (AIGO) to integrate the NIST AI Risk Management Framework with the existing OECD Recommendation on AI (2019), an extremely influential multilateral soft-law instrument. WPF became a member of the OECD AI Network of Experts in 2018 and assisted with the drafting of the OECD Recommendation on AI. WPF is currently an active member of the AI Risk and Accountability expert group at the OECD.
The second important piece of news is that the National AI Research Resource (NAIRR) Task Force has published its report, Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource. The implementation plan for a national “cyberinfrastructure” would connect the U.S. research community with the resources necessary to innovate in and improve AI and machine learning. The plan is built around four goals: (1) spur innovation, (2) increase diversity of talent, (3) improve capacity, and (4) advance trustworthy AI. The Task Force is co-chaired by the White House Office of Science and Technology Policy and the National Science Foundation.
In its discussion of advancing trustworthy AI, the Task Force specifically called out the importance of privacy and civil liberties:
The NAIRR must be proactive in addressing privacy, civil rights, and civil liberties issues by integrating appropriate technical controls, policies, and governance mechanisms from its outset. The Operating Entity should work with its Ethics Advisory Board to develop criteria and mechanisms for evaluating proposed research and resources for inclusion in the NAIRR from a privacy, civil rights, and civil liberties perspective. (p. v)
Additionally, the plan specifically calls out the importance of being inclusive across a range of stakeholders, stating:
A variety of scientific and advocacy groups—scientific societies and associations; groups concerned with data privacy, civil rights, and civil liberties implications of AI; philanthropic organizations; and academic researchers—should have the opportunity to leverage the NAIRR for research and evaluation that promote the responsible development and use of AI. (p. 11)
This is welcome news for the growing NGO community focusing on AI policy, AI governance, ethical AI, responsible AI, green AI, and other aspects of AI ecosystems and governance.
The release of the NIST AI RMF and the NAIRR Task Force report, together with the prior publication of the Blueprint for an AI Bill of Rights, marks three steps in the right direction toward a more responsible and trustworthy AI ecosystem in the U.S., and it inches the country closer to having, at last, the beginning elements of a National AI Strategy in place.