New York City lacks needed guidelines and policies for agencies’ use of artificial intelligence (AI), leaving it vulnerable to misguided, inaccurate or biased outcomes in several programs that can directly impact New Yorkers’ lives, an audit released today by New York State Comptroller Thomas P. DiNapoli found.
“Government’s use of artificial intelligence to improve public services is not new. But there need to be formal guidelines governing its use, a clear inventory of what’s being used and why, and accuracy standards, so that the algorithms that can help train educators, identify potential criminal suspects, prioritize child abuse cases and inspect buildings don’t do more harm than good,” DiNapoli said. “I hope the city’s Office of Technology and Innovation acts on my office’s recommendations to help ensure the AI being used by the city is not at risk of bias or inaccuracies.”
The city’s Office of Technology and Innovation (OTI) was created by the Mayor’s Executive Order 3 in January 2022, with responsibility for the oversight and governance of AI, which in the previous administration had been under an Algorithm Management and Policy Officer (AMPO). When OTI replaced the AMPO, many of the previous administration’s goals for creating guidelines for the fair and responsible use of AI technologies, and handling complaints related to potential harm caused by an agency’s use of them, were incomplete and remain unfinished today. Separately, Local Law 35, enacted Jan. 15, 2022, mandates agencies disclose the algorithmic tools they have used one or more times during the prior calendar year to the Mayor’s Office.
DiNapoli’s audit looked at governance policies on AI use at four agencies: the Administration for Children’s Services (ACS), the Department of Education (DOE), the New York City Police Department (NYPD) and the Department of Buildings (DOB). The audit found significant shortfalls in oversight and risk assessment of artificial intelligence.
Some agencies have taken steps to address risk of biased outcomes. For example, ACS removed certain types of racial and ethnicity data from its Severe Harm Predictive Model, which is designed to identify children most at risk of abuse and prioritize quality assurance reviews of cases. ACS has internal guidelines specific to the use of AI and officials said they were developing a more formal policy to ensure the guidelines are followed.
In contrast, DOE does not require an assessment for its use of AI tools, such as professional development tools that use voice technology to analyze classroom discourse patterns and help educators improve their communication skills.
Auditors found that the NYPD has created an impact and use policy for certain tools which recognizes the potential for bias in facial recognition software, particularly for groups other than white males. However, it has not set a standard of acceptable accuracy. The impact and use policy describes appropriate use of facial recognition technology, but the guidelines are part of NYPD’s broader surveillance policies and not specific to some of the unique risks posed by AI.
At DOB, officials said they have no governance policies or responsibility to oversee the use of AI because they do not use it. However, DOB does allow the inspectors who carry out its facade safety inspection program to use AI tools to identify facade defects in addition to any required hands-on inspection. DOB does not require facade inspectors to report whether they used AI tools during inspections, which are required every five years for buildings over six stories tall.
The audit also found that none of the agencies keeps a formal inventory of the AI systems and tools it uses or has in development, and that only ACS keeps an inventory of all of the data sets its AI tools use.
None of the agencies had formal policies on the intended use and outcomes of AI tools or systems. As referenced, the NYPD’s policies on use and outcomes are for surveillance tools and not specific to AI.
DiNapoli’s audit recommended that OTI create a governance structure for the use of AI tools and systems by city agencies and assess the risk each poses. It also recommended a review of past AMPO policies to identify areas that need to be strengthened by OTI.
In response to the audit, OTI agreed that more work was needed to further the city’s AI governance and offered examples of existing efforts it will build on. Among the four agencies in the audit, ACS and the NYPD stated that they had governance measures in place or were in the process of addressing risks or concerns raised in the audit findings. DOB stated that it was researching ways to introduce AI in accordance with building codes, but has not adopted a formal framework for its use. Their full responses are available in the audit. DOE did not provide a written response to the audit.
Track state and local government spending at Open Book New York. Under State Comptroller DiNapoli’s open data initiative, search millions of state and local government financial records, track state contracts, and find commonly requested data.