The USA’s National Artificial Intelligence Initiative (NAII) was born out of the National AI Initiative Act of 2020 (DIVISION E, SEC. 5001), which became law on January 1, 2021. The aim of the NAII is to develop a coordinated program across the US Federal government to drive leadership in AI research and development, education, and the use of trustworthy AI in the public and private sectors.
Additionally, the initiative aims to integrate AI systems across all sectors of the American economy and society.
The NAII plans to establish cooperation among all US Departments and Agencies, together with academia, industry, non-profits, and civil society organisations, to achieve its goals via 6 key strategic pillars:
The White House Office of Science and Technology Policy has put forward 10 principles to consider when formulating regulatory and non-regulatory approaches to the development and use of AI in the USA:
The USA’s proposed approach to AI risk assessment can be classified into 3 categories: assessment, independence and accountability, and continuous review.
Under this approach, businesses would be required to conduct algorithm impact assessments to identify the risks associated with their AI systems. The primary objective is to mandate a clear description of each risk a particular AI system generates, together with a clear account of how those risks are being mitigated and resolved.
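To make the idea concrete, the record an algorithm impact assessment produces can be sketched as a simple data structure. This is purely illustrative: the class and field names (`Risk`, `AlgorithmImpactAssessment`, `severity`, `mitigation`) are hypothetical and are not drawn from any statute or proposed rule.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str   # what could go wrong, stated plainly
    severity: str      # illustrative scale, e.g. "low" / "medium" / "high"
    mitigation: str    # how the risk is being minimised; empty if unaddressed

@dataclass
class AlgorithmImpactAssessment:
    system_name: str
    assessor: str                     # ideally an independent third party
    risks: list[Risk] = field(default_factory=list)

    def unmitigated(self) -> list[Risk]:
        # Risks that have been identified but have no stated mitigation —
        # the gap a regulator would ask the business to close.
        return [r for r in self.risks if not r.mitigation]
```

The key point the sketch captures is that an assessment pairs every identified risk with an explicit mitigation, so that gaps are visible rather than implicit.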
The US State of Virginia’s Consumer Data Protection Act (2021) already requires assessments for some high-risk algorithms. In the EU, the GDPR currently requires similar impact assessments for high-risk processing of personal data.
Impact assessments must be conducted by independent third parties to ensure transparency and accountability. Independent reviews and assessments are a cornerstone of transparent AI regulation.
Because AI evolves at a rapid rate, no single assessment will suffice: continuous review and reassessment of AI systems are crucial to keeping compliance in step with the pace of innovation.