New York State has taken a step towards ensuring responsible and ethical use of Artificial Intelligence (AI) technologies in the public sector through its newly established Acceptable Use of Artificial Intelligence Technologies policy. This policy, issued by the New York State Office of Information Technology Services (ITS), aims to guide state government in leveraging AI to drive innovation and operational efficiencies while safeguarding privacy, managing risks, and promoting accountability, safety, and equity.
In this article, we dive into the scope and requirements of this policy, with an emphasis on its privacy obligations.
Scope of Application
The policy applies to New York State government entities, including employees and third parties, such as contractors and consultants, that use or access any IT resources the entity is responsible for, regardless of whether those resources are hosted by a third party on the entity's behalf.
The Policy covers various AI technologies, including machine learning, natural language processing, computer vision, and generative AI, whether new or existing, but excludes basic calculations and automation, as well as pre-recorded "if this, then that" (IFTTT) response systems.
Requirements
Under the policy, government entities are required to seek approval for the use of AI systems from legal and operational leadership, including ethics officers. A key requirement is human oversight of AI systems, along with documentation of their outcomes, decisions, and methodologies. Fully automated decisions are prohibited.
Fairness, equity, explainability, and transparency are also addressed; however, in contrast to the human-oversight requirement, which is a ‘must,’ these requirements are merely a ‘should.’
Government entities are further required to conduct risk assessments for each AI system, addressing security, privacy, legal, reputational, and competency risks. These assessments should align with the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
An AI inventory will be kept by ITS, requiring all government entities to inform ITS of the AI systems they use within 180 days of the Policy's coming into force on Jan. 8, 2024. This inventory is to be made public where practicable.
The Policy also requires ongoing re-assessments and re-training in light of the rapidly evolving AI landscape.
The Policy offers the most flexibility with respect to intellectual property law. Here, the Policy requires government entities to “confer with their counsel’s office” when it comes to using copyrighted materials as AI input, citing the ongoing evolution of the legal landscape in this regard.
Privacy Obligations
A significant focus of the New York AI policy is on privacy obligations related to AI systems. Government entities are required to develop policies and controls to ensure the appropriate use of AI concerning personally identifiable, confidential, or sensitive information. The examples of privacy controls provided include:
- A privacy impact assessment;
- Privacy-oriented settings, including data minimization, such as only processing data that is necessary during the development and use of the AI system;
- Data retention settings that follow the requirements of federal and state standards;
- Ensuring the accuracy of data put into the AI system and the AI system’s outputs;
- Disposal of the data once the purpose of using the data has been fulfilled, when possible, in compliance with applicable state and federal laws;
- Providing data subjects with control and transparency in relation to data processing.
Moreover, according to the New York AI Policy, government entities must adhere to information security policies and standards, which name encryption and pseudonymization in particular, to protect data throughout its lifecycle.
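Pseudonymization typically means replacing a direct identifier with a token that stays stable across records but cannot be reversed without a secret. A minimal Python sketch using a keyed hash (HMAC-SHA256); the secret key shown is a placeholder, and in practice it would come from a key-management service:

```python
import hashlib
import hmac

# Placeholder secret; a real deployment would load this from a
# key-management service, never hard-code it.
SECRET_KEY = b"replace-with-managed-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping is deterministic, so records belonging to the same
    person can still be joined, but the original value cannot be
    recovered without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

token = pseudonymize("jane.doe@example.com")
assert token == pseudonymize("jane.doe@example.com")  # stable across records
assert token != "jane.doe@example.com"                # identifier not exposed
```

Note that pseudonymized data is still regulated data if the key holder can re-identify it, which is one reason the Policy treats pseudonymization as a control alongside, not instead of, encryption.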
An illustrative example provided in the Policy highlights how seriously the privacy obligations are taken: “Inputting personally identifiable, confidential, or sensitive information into an AI system where that AI system uses that information to build upon its model and/or may disclose that information to an unauthorized recipient” constitutes an unacceptable use of AI. Given the well-established fact that current AI models ‘memorize’ their training data and occasionally spew it out in production, prohibiting this use of AI systems arguably poses significant constraints on government entities.
Conclusion
New York's Acceptable Use of Artificial Intelligence Technologies policy represents a step towards promoting responsible AI adoption while safeguarding privacy and ensuring accountability. Extending the scope to any vendors that make their AI systems available to government entities means that its effects will likely be felt outside of the public sector as well.
Private AI can facilitate compliance with the Policy by removing personally identifiable information from data sets used to train AI systems. Try it on your own data here or request an API key.