Dioptra - New Open-Source AI Model Risk Testing Tool Released by NIST

Growing AI adoption has prompted the National Institute of Standards and Technology (NIST) to release Dioptra, a new open-source software tool for evaluating security risks to AI models. The tool supports the "measure" function of the NIST AI Risk Management Framework.

Dioptra, which is available on GitHub, can measure how evasion, poisoning, and oracle attacks degrade the performance of AI models, according to NIST. The tool was originally built to assess attacks against image-classification models but can be adapted to other machine-learning applications, such as speech recognition.
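To make the evaluation concrete, here is a minimal, self-contained sketch of the kind of evasion-attack measurement Dioptra automates. This is not Dioptra's API: it hand-implements the Fast Gradient Sign Method (FGSM) against a toy logistic-regression classifier and reports accuracy before and after the attack, which is the basic metric such evaluations produce. All names and parameters are illustrative.

```python
# Illustrative evasion-attack measurement (NOT Dioptra's API):
# FGSM against a toy logistic-regression "image" classifier.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 flattened 8x8 "images", two classes.
X = rng.normal(size=(200, 64))
w_true = rng.normal(size=64)
y = (X @ w_true > 0).astype(float)

# Train a logistic-regression classifier with plain gradient descent.
w = np.zeros(64)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

def accuracy(X_eval):
    preds = (X_eval @ w > 0).astype(float)
    return (preds == y).mean()

# FGSM evasion attack: nudge each input by eps in the direction that
# increases the classifier's loss, i.e. the sign of d(loss)/d(x).
eps = 0.5
p = 1.0 / (1.0 + np.exp(-(X @ w)))
grad_x = np.outer(p - y, w)          # logistic-loss gradient w.r.t. inputs
X_adv = X + eps * np.sign(grad_x)

print(f"clean accuracy:       {accuracy(X):.2%}")
print(f"adversarial accuracy: {accuracy(X_adv):.2%}")
```

The gap between clean and adversarial accuracy is the evaluation's core output; a tool like Dioptra runs such experiments systematically across attacks, models, and perturbation budgets.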

User feedback has helped shape Dioptra, and NIST plans to keep collecting feedback to improve the tool. NIST released Dioptra alongside a new draft on managing the risks of dual-use foundation models from the agency's AI Safety Institute, which is open for public comment until September 9.
