The United States National Institute of Standards and Technology (NIST) and the Department of Commerce are seeking members for the newly established Artificial Intelligence (AI) Safety Institute Consortium.
Participate in a new consortium for evaluating artificial intelligence (AI) systems to improve the emerging technology’s safety and trustworthiness. Here’s how: https://t.co/HPOIHJyd3C
— National Institute of Standards and Technology (@NIST) November 2, 2023
In a document published to the Federal Register on Nov. 2, NIST announced the formation of the new AI consortium, along with an official notice expressing the agency’s call for applicants with the relevant credentials.
Per the NIST document:
“This notice is the initial step for NIST in collaborating with non-profit organizations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI.”
According to the notice, the purpose of the collaboration is to create and implement specific policies and measurements that ensure US lawmakers take a human-centered approach to AI safety and governance.
Collaborators will be required to contribute to a range of related functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.
These efforts come in response to a recent executive order issued by US President Joseph Biden. As Cointelegraph recently reported, the executive order established six new standards for AI safety and security, though none appear to have been legally enshrined.
While many European and Asian states have begun implementing policies governing the development of AI systems, the US has lagged in this arena. President Biden’s executive order and the establishment of the Safety Institute Consortium mark progress toward specific policies governing AI in the US. However, there is still no timeline for implementing laws on AI development or deployment in the US beyond existing policies, which many experts consider inadequate when applied to the burgeoning AI sector.
- The US National Institute of Standards and Technology (NIST) and the Department of Commerce have established the Artificial Intelligence (AI) Safety Institute Consortium.
- The purpose of the consortium is to create and implement specific policies and measurements to ensure a human-centered approach to AI safety and governance.
- Collaborators will contribute to various areas, including the development of measurement tools, policy recommendations, red-teaming efforts, and environmental analysis.
- The consortium’s formation is in response to an executive order by US President Joseph Biden, which established six new standards for AI safety and security.
- While other countries have started implementing AI development policies, the US has lagged behind, and there is no clear timeline for implementing AI laws beyond existing policies.