NIST AI Framework

Understanding the NIST Artificial Intelligence Risk Management Framework: Part 1 of 6

The National Institute of Standards and Technology (NIST) released a comprehensive guide, the Artificial Intelligence Risk Management Framework (AI RMF 1.0), to help organizations navigate the complexities of AI risk management. Over the next several blog posts, I will summarize the document and its key takeaways. Remember: AI regulation is already happening and will only intensify, so anyone building or working with AI systems needs a response. The framework covers:

  • Framing Risk. The document emphasizes understanding and addressing risks, impacts, and harms associated with AI. It discusses challenges in AI risk management, including risk measurement, risk tolerance, risk prioritization, and organizational integration and management of risk.
  • Audience. The framework is designed for a broad audience, including AI designers, developers, deployers, and other stakeholders involved in the AI lifecycle.
  • AI Risks and Trustworthiness. The document outlines several trustworthiness characteristics of AI systems. These include being valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful biases managed.
  • Effectiveness of the AI RMF. The document discusses the effectiveness of the AI RMF, emphasizing the need for a comprehensive approach to risk management that balances tradeoffs among the trustworthiness characteristics.
  • AI RMF Core. The core of the framework describes four specific functions to help organizations address the risks of AI systems in practice. These functions – GOVERN, MAP, MEASURE, and MANAGE – are broken down further into categories and subcategories.
  • AI RMF Profiles. The document also discusses AI RMF profiles, which apply the framework's functions, categories, and subcategories to specific settings or use cases based on an organization's requirements, risk tolerance, and resources.
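To make the structure above concrete, here is a minimal sketch of how an organization might encode the Core functions and trustworthiness characteristics in a simple risk register. The function and characteristic names come from the framework itself; the `RiskEntry` structure, its field names, and the example rows are my own illustrative assumptions, not anything prescribed by NIST.

```python
from dataclasses import dataclass

# The four Core functions named in the AI RMF.
CORE_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

# Trustworthiness characteristics listed in the framework.
TRUST_CHARACTERISTICS = (
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair with harmful biases managed",
)

@dataclass
class RiskEntry:
    """One hypothetical risk-register row tying a risk to a Core function
    and a trustworthiness characteristic (structure is illustrative)."""
    description: str
    function: str          # one of CORE_FUNCTIONS
    characteristic: str    # one of TRUST_CHARACTERISTICS
    mitigation: str = "TBD"

    def __post_init__(self):
        # Validate against the framework's named functions/characteristics.
        if self.function not in CORE_FUNCTIONS:
            raise ValueError(f"unknown function: {self.function}")
        if self.characteristic not in TRUST_CHARACTERISTICS:
            raise ValueError(f"unknown characteristic: {self.characteristic}")

# Example register entries (hypothetical risks, not from the NIST document).
register = [
    RiskEntry("Training data may encode demographic bias",
              "MEASURE", "fair with harmful biases managed",
              "Run bias audits before deployment"),
    RiskEntry("No owner assigned for model incidents",
              "GOVERN", "accountable and transparent"),
]

for entry in register:
    print(f"[{entry.function}] {entry.description} -> {entry.mitigation}")
```

A real implementation would go further, breaking each function into the framework's categories and subcategories, but even this shape shows how GOVERN, MAP, MEASURE, and MANAGE can anchor day-to-day risk tracking.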

The AI RMF 1.0 is a result of collaboration between NIST and the private and public sectors, consistent with broader AI efforts called for by the National AI Initiative Act of 2020, the National Security Commission on Artificial Intelligence recommendations, and the Plan for Federal Engagement in Developing Technical Standards and Related Tools.

In Part 2, I’ll tackle the framework’s approach to framing AI risk. Please book time if you have any questions.


Squark is a no-code AI as a Service platform that helps data-literate business users make better decisions with their data. Squark is used across a variety of industries & use cases to uncover AI-driven insights from tabular and textual data, prioritize decisions, and take informed action. The Squark platform is designed to be easy to use, accurate, scalable, and secure.

Copyright © 2023 Squark. All Rights Reserved | Privacy Policy