On January 26, 2023, the National Institute of Standards and Technology (NIST) released the first version of its AI Risk Management Framework (AI RMF 1.0), providing guidelines for organizations to manage and mitigate the risks associated with artificial intelligence (AI) systems.
In an era where AI is increasingly integrated into various industries, the release of this framework is a timely and crucial step towards promoting the responsible and ethical use of AI.
NIST’s AI RMF addresses the risks involved in the design, development, use, and evaluation of AI products, services, and systems.
While the document is intended for voluntary use, organizations should consider NIST's risk management best practices in order to implement effective AI governance processes.
The AI RMF aims to provide a flexible, structured, and measurable process to address AI risks prospectively and continuously throughout the AI lifecycle.
The core characteristics and functions of trustworthy AI
The first part of the AI RMF highlights the core characteristics of trustworthy AI. According to NIST, AI systems should be:
- Valid and Reliable
- Safe
- Secure and Resilient
- Accountable and Transparent
- Explainable and Interpretable
- Privacy-Enhanced
- Fair, with Harmful Bias Managed
The second part of the AI RMF describes four core functions of AI risk management: Govern, Map, Measure, and Manage.
The AI RMF 1.0 also encourages using AI risk management profiles to show how risk can be managed throughout the AI lifecycle or in specific applications using real-life examples. These are categorized as follows:
- Use-case profiles detail how AI risks are being managed in a particular industry or sector (such as hiring or fair housing).
- Temporal profiles show current and target AI risk management outcomes within a given sector, industry, organization, or application context.
- Cross-sectoral profiles explain how AI system risks can overlap when used in various use cases or sectors.
The NIST AI RMF is also accompanied by the AI RMF Playbook, a companion resource that provides suggested actions for achieving the outcomes of the four core functions of AI risk management.
The NIST framework: likely to become the de facto standard followed in the US
Brad Fisher, CEO of Lumenova AI, shared support for the framework and outlined the challenges ahead of the AI industry:
“The NIST AI RMF is comprehensive and balanced – very high quality. While other frameworks exist, the NIST framework will likely become the de facto standard followed in the US and, to a large extent, globally.
The challenge will come in implementation because many of the objectives that it raises are not easily answered and will require thorough evaluation by business leaders, AI leaders and AI practitioners to evaluate the complexities from a policy perspective, recognizing the implications on various stakeholders, such as customers, employees, and others. This won’t be quick and it won’t be easy.
Another challenge is the fact that the AI RMF is intended for voluntary use. As in many things that are voluntary, early adopters that use this guidance are likely the ones who are better able to comply with its provisions, whereas those who choose not to adopt it may be those with more problematic situations – in other words, those that really need it. Since the creation of this RMF is based on a Congressional action, a follow-on Congressional action is necessary to make sure that all companies meeting a certain threshold are required to comply with this guidance.”
The benefits of applying NIST’s risk management framework
Organizations that adopt the NIST framework can expect to benefit from a number of advantages.
Firstly, the framework provides a common language for discussing AI risk management, which can help to promote consistency and coherence across different organizations.
Secondly, the framework provides a structured approach to managing AI risks, which can help organizations make more informed decisions about their use of AI.
Lastly, the framework provides guidelines for incorporating responsible practices into the implementation of AI, aiming to ensure that AI systems are created and used ethically.
Key takeaways
NIST’s AI Risk Management Framework is an important resource for organizations looking to manage and mitigate the risks associated with AI. Developed over 18 months, the framework reflects about 400 sets of formal comments that NIST received from more than 240 organizations on draft versions.
Adopting the NIST framework is an important step toward promoting the responsible and ethical use of AI, and this standardization effort fosters greater alignment across the landscape of ML operations.
Lumenova facilitates the risk management process, equipping your team with knowledge and expertise
Lumenova AI automates the complete Responsible AI lifecycle and helps organizations develop, document and track progress based on various risk management frameworks, including NIST’s AI RMF.
Our platform’s automated process connects business objectives to technical assessments, helping you save time and resources while driving value through AI.
Get in touch with us to find out more.