NIST seeks input on first draft of AI risk management framework
Initial effort addresses risks in the design, development and use of artificial intelligence across industries, including healthcare.
Comments are due Friday, April 29 on a risk management framework for artificial intelligence that is being developed by the National Institute of Standards and Technology.
The framework is part of an effort by NIST to create rules of the road to ensure that AI technologies and systems are trustworthy and responsible.
The agency’s work is likely to have a receptive audience in healthcare, where concerns have emerged about the data and processes used to develop AI algorithms that could directly impact patient care.
Several initiatives are underway within healthcare to provide guidance on AI use within the industry, including the recently formed Coalition for Health AI, an initiative led by John Halamka, MD, of the Mayo Clinic, along with Suchi Saria at Johns Hopkins Medicine, Nigam Shah at Stanford Health Care and Brian Anderson of MITRE Corp.
NIST released the initial draft of its AI framework in mid-March. The voluntary framework seeks to address risks in the design, development, use and evaluation of AI systems. It is not aimed solely at healthcare, but is intended to offer broad guidance across several arenas in which AI is used, including commerce, transportation, cybersecurity and healthcare.
NIST describes it as an effort to create a voluntary framework “to improve understanding and help manage enterprise and societal risks related to AI systems. It aims to provide a flexible, structured and measurable process to address AI risks throughout the AI lifecycle, and it offers guidance for the development and use of trustworthy and responsible AI.”
Comments can be submitted in writing to AIframework@nist.gov by Friday, April 29.
The draft is built on an earlier concept paper released in December; the agency expects to conduct a second workshop on the draft framework in the summer or early fall, with the final framework to be published in late 2022 or 2023.
The framework initiative is consistent with NIST’s broader AI efforts, recommendations by the National Security Commission on Artificial Intelligence, and the Plan for Federal Engagement in AI Standards and Related Tools. Congress has directed NIST to collaborate with the private and public sectors to develop the risk framework for AI.
NIST’s proposal indicates that the framework and supporting resources “will be updated and improved based on evolving technology and the standards landscape around the globe. In addition, as the AI RMF is put into use, additional lessons will be learned that can inform future updates and additional resources.”
In conjunction with the release of the framework, NIST also released a special publication on identifying and managing bias within AI. The publication notes that bias “is related to broader societal factors — human and systemic institutional in nature — which influence how AI technology is developed and deployed.”
The draft framework and publication on bias are part of NIST’s larger effort to support the development of trustworthy and responsible AI technologies.
A recent article written by Halamka, Saria and Shah highlights the need for “guardrails” to provide guidance for the development of AI algorithms for healthcare. It notes that “Experts have been aware that data shifts — which happen when an algorithm must process data that differ from those used to create and train it — adversely affect algorithmic performance. State-of-the-art tools and best practices exist to tackle it in practical settings. But awareness and implementation of these practices vary among AI developers.”
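The data-shift problem the authors describe — an algorithm receiving data whose distribution differs from its training data — can be illustrated with a minimal monitoring sketch. The function names, the Kolmogorov-Smirnov check and the threshold below are hypothetical choices for illustration, not part of any NIST or Coalition for Health AI guidance.

```python
# Minimal sketch of a data-shift check: compare the distribution of a
# feature in the training data against incoming live data using a
# two-sample Kolmogorov-Smirnov statistic (the maximum gap between the
# two empirical CDFs). All names and thresholds here are illustrative.

def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic: max gap between empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    max_gap = 0.0
    for x in sorted(set(a + b)):
        cdf_a = sum(1 for v in a if v <= x) / len(a)
        cdf_b = sum(1 for v in b if v <= x) / len(b)
        max_gap = max(max_gap, abs(cdf_a - cdf_b))
    return max_gap

def flag_data_shift(train_values, live_values, threshold=0.2):
    """Flag a feature for review if its distribution has drifted."""
    return ks_statistic(train_values, live_values) > threshold

# Example: training data uniform on [0, 1); live data shifted upward.
train = [i / 100 for i in range(100)]
shifted = [0.5 + i / 200 for i in range(100)]
print(flag_data_shift(train, shifted))  # → True (large shift flagged)
```

In practice, production monitoring tools use more robust drift metrics and per-feature significance tests, but the idea is the same: continuously compare live inputs against the training distribution and alert when they diverge.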
The Coalition for Health AI is espousing codification within the industry that includes:
- Using algorithm “labeling” that describes the data used for its development, its usefulness and limitations;
- Conducting ongoing testing and monitoring of algorithm performance;
- Developing best practices and approaches for appropriate clinical use and understanding clinical contexts and goals to better assess risks and adapt to local variations.