NIST seeks input on AI risk management framework
The National Institute of Standards and Technology is seeking comments on developing an Artificial Intelligence Risk Management Framework (AI RMF) that would improve organizations’ ability to incorporate trustworthiness into the design, development and use of AI systems.
“The Framework aims to foster the development of innovative approaches to address characteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of harmful uses,” NIST wrote in a July 28 request for information posted in the Federal Register.
NIST wants input on how the framework should address challenges in AI risk management, including identification, assessment, prioritization, response and communication of AI risks; how organizations currently assess and manage AI risk, including bias and harmful outcomes; and how AI can be developed so that it reduces the potential negative effects on individuals and society, the RFI said.
Suggestions on common definitions and characterizations for the aspects of trustworthiness should be submitted, as well as best practices that may align with an AI risk framework.
NIST plans to develop its AI RMF using the same process it employed for the widely embraced 2014 Cybersecurity Framework and the 2020 Privacy Framework.
Responses are due Aug. 19. Read the full RFI here.
About the Author


Shourjya Mookerjee is an associate editor for GCN and FCW. He is a graduate of the University of Maryland, College Park, and has written for Vox Media, Fandom and a number of capital-area news outlets. He can be reached at [email protected] – or you can find him ranting about sports, cinematography and the importance of local journalism on Twitter @byShourjya.