Artificial Intelligence Governance

Issued Date
February 16, 2023
New York City Office of Technology and Innovation


Objective

To assess New York City’s progress in establishing an appropriate governance structure over the development and use of artificial intelligence (AI) tools and systems. The audit covered the period from January 2019 through November 2022.

About the Program

AI-powered tools have an increasingly vital role in industry operations, including agriculture, health care and medicine, manufacturing, transportation, and government, enabling entities to operate more intelligently, more productively, and more competitively. Yet even as AI generates value, it is also giving rise to a host of unwanted, and sometimes serious, consequences.

New York City (NYC or City) and some of its agencies have been using AI to aid their operations. For example, the NYC Police Department (NYPD) uses AI to power its facial recognition technologies, which help the police identify unknown individuals, and the Administration for Children’s Services (ACS) uses AI to predict which children are most likely to experience future harm, in order to prioritize cases for quality assurance review. To ensure transparency in the use of AI and similar tools, NYC has enacted laws and policies requiring that the use of such technology be reported.

AI tools and systems pose unique accountability challenges because their inputs and operations are not always visible. The U.S. Government Accountability Office (GAO) noted that a lack of transparency reduces effective oversight in identifying errors, misuse, and bias. It is therefore essential to establish governance structures over AI to ensure that its use is transparent and accurate and does not generate harmful, unintended consequences. Without adequate governance and oversight over the use of AI, misguided, outdated, or inaccurate results can occur and may lead to unfair or ineffective outcomes for those who live or work in, or visit, NYC.

Key Findings

NYC does not have an effective AI governance framework. While agencies are required to report certain types of AI use on an annual basis, there are no rules or guidance on the actual use of AI. Consequently, City agencies developed their own, divergent approaches. We sampled four City agencies: NYPD, ACS, Department of Education (DOE), and Department of Buildings (DOB). Based on our survey results, we found ad hoc and incomplete approaches to AI governance, which do not ensure that the City’s use of AI is transparent, accurate, and unbiased and avoids disparate impacts.

Some agencies have identified key risks and created processes to address them; others have not created any AI-specific policies or taken other steps toward effective AI governance. For example, ACS has taken specific steps to address possible bias in its Severe Harm Predictive Model by eliminating certain types of racial and ethnic data and testing the model’s output against benchmarks. However, DOE does not require any steps to determine whether the AI tools available to its schools have been evaluated for potential bias.

Some agencies perform certain activities that partially address components of AI governance, such as identifying appropriate use, intended outcomes, data governance, and potential impacts, but do so because of laws created to address issues not specific to AI. For example, the NYPD created impact and use policies for its surveillance tools in compliance with the NYC Public Oversight of Surveillance Technology Act. The impact and use policy for its facial recognition technology acknowledges the potential bias of facial recognition, particularly against groups other than white males, and states that NYPD only uses facial recognition technology that has been evaluated by the National Institute of Standards and Technology (NIST). However, NYPD did not review NIST’s evaluation of the facial recognition technology it used, nor did it establish what level of accuracy would be acceptable. NYPD officials explained that any potential match is reviewed by multiple individuals to help mitigate potential accuracy and bias issues.

Furthermore, NYC’s initial governance requirements for algorithmic tools, which include AI, were not fully met. The Mayor’s Office of Operations’ Algorithms Management and Policy Officer (AMPO) was required by Executive Order 50 of 2019 to establish a reporting framework for algorithmic tools; policies and protocols to guide the City and its agencies in the fair and responsible use of such tools; a process for individuals to learn about the City’s use of these tools; a complaint resolution process for those impacted by such use; and a public education strategy. AMPO created a reporting framework for agencies, published a listing of reported tools, and held several public engagement sessions. However, in January 2022, AMPO was discontinued by Executive Order 3, which removed those requirements and placed responsibility for algorithmic and AI management within the NYC Office of Technology and Innovation (OTI). At that time, AMPO had not established policies and protocols to guide the City and its agencies in the fair and responsible use of such tools, or a means for the City to resolve complaints made by individuals regarding algorithmic impacts. In addition, we identified instances where agency tools were not reported or included in the public listing of tools.

Key Recommendations

  • Use relevant AI governance frameworks to assess the risks of AI used by City agencies.
  • Review past AMPO policies to identify areas that need to be strengthened by OTI.
  • Implement policies to create an effective AI governance structure.


State Government Accountability Contact Information:
Audit Director: Kenrick Sifontes
Phone: (212) 417-5200; Email: [email protected]
Address: Office of the State Comptroller; Division of State Government Accountability; 110 State Street, 11th Floor; Albany, NY 12236