
NEWS from the Office of the New York State Comptroller
Contact: Press Office 518-474-4015

DiNapoli Op-Ed in Times Union

New York Needs Better Oversight of Government AI Systems

May 28, 2025

The Times Union published an op-ed by New York State Comptroller Thomas P. DiNapoli on the need for tougher oversight of government's use of Artificial Intelligence (AI) systems:

Artificial intelligence has the potential to transform how government operates and delivers services. New York state agencies have used AI companions to help seniors combat social isolation, and the Department of Motor Vehicles is using facial recognition technology to deter identity fraud.

These significant technological advances come with profound ethical, legal and societal questions that have to be addressed. If they aren’t, the very New Yorkers they are meant to help could be put in harm’s way.

That is why my office has called on New York’s leaders to enact robust oversight over government’s uses of AI. We need to be assured that these technologies are used safely, fairly and responsibly. In a recent audit that examined the state’s use of AI, my auditors found clear evidence that New York’s use of AI is running well ahead of the state’s ability to manage it.

AI governance must be baked into the process of agencies’ adoption and use of various technologies. This means having rules, guidelines and practices in place that promote transparency from the start, address the flaws and biases that can come with AI, and instill public accountability.

This year, New York strengthened its AI governance framework with a new law requiring state agencies to disclose the AI tools they use and directing the Office of Information Technology Services to keep a public inventory of AI systems. The law also includes protections for employees.

The state has also recently updated its AI acceptable-use policy, which requires agencies to perform a risk assessment of any AI system they plan to use, assign a human to oversee it if it makes decisions that affect the public, check its performance and document the outcomes of its use.

These are positive developments, but they are not enough.

Our audit found that agencies lack specific procedures to test the accuracy and fairness of their AI systems. That leaves residents vulnerable to harmful automated decisions. One agency that uses voice biometric software to validate people’s identities had never tested its system’s accuracy. Another agency said it wasn’t using AI because it did not think facial recognition technology was AI - even though it had a facial recognition system that met the state’s definition of AI.

The lack of central oversight, inadequate guidance for agencies, the absence of an AI inventory and insufficient training are at the root of these problems. The result: State agencies are left largely alone to adopt AI and create the rules to govern it.

Unsurprisingly, this creates serious vulnerabilities. For example, when we asked what happens to the data created and collected by the AI companion devices given to seniors, officials said the vendor — not the state agency — owns the data on their performance and the recordings of their interactions.

Failure to address who owns the data creates significant privacy risks. Agencies could lose the ability to delete the data; the vendor could use the data to develop new products or train models, or could sell it without agency consent; and the data could be exposed to security breaches and the loss of personal information.

Establishing good AI governance will require efforts on several fronts. We need a framework that sets clear boundaries for using AI and guidelines and training that help agencies understand and adhere to these standards. Systems must be tested, and not just by the vendor selling the technology. Testing and oversight have to be continuous as AI evolves.

Regular and independent audits will verify that agencies are living up to standards, checking for vulnerabilities and using what they learn to drive improvements. My office will look at the AI systems used by state agencies to see if they are working and verify that vendors are playing by the rules.

The new law will help increase transparency and accountability, but it will only be effective if it is implemented consistently and backed up by rigorous oversight. Audits help verify that written principles are upheld in practice and that these systems benefit New Yorkers by making government more effective at delivering services.

This strategy not only safeguards against current risks; it also prepares governments to adapt to future advancements and builds confidence that the state is using AI responsibly, ethically and transparently.


Times Union Op-Ed
Commentary: New York Needs Better Oversight of Government AI Systems

Audit 
New York State Artificial Intelligence Governance