New York’s Acceptable Use of AI Policy: A Focus on Privacy Obligations

New York State has taken a step towards ensuring responsible and ethical use of Artificial Intelligence (AI) technologies in the public sector through its newly established Acceptable Use of Artificial Intelligence Technologies policy. This policy, issued by the New York State Office of Information Technology Services (ITS), aims to guide state government in leveraging AI to drive innovation and operational efficiencies while safeguarding privacy, managing risks, and promoting accountability, safety, and equity.

In this article, we examine the scope and requirements of the policy, with an emphasis on the privacy obligations it sets out.

Scope of Application

The Policy applies to New York state government entities, including employees and third parties that use or access any IT resources an entity is responsible for, regardless of whether those resources are hosted by a third party on the entity's behalf. Access or use by contractors or consultants is covered as well.

The Policy covers various AI technologies, including machine learning, natural language processing, computer vision, and generative AI, whether new or existing, but excludes basic calculations and automation as well as pre-recorded "if this, then that" (IFTTT) response systems.

Requirements

Under the Policy, government entities are required to seek approval for the use of AI systems from legal and operational leadership, including ethics officers. A key requirement is human oversight of AI systems, along with documentation of their outcomes, decisions, and methodologies. Fully automated decisions are prohibited.

Fairness, equity, explainability, and transparency are also required; however, in contrast to the human-oversight requirement, which is a 'must,' these requirements are merely a 'should.'

Government entities are further required to conduct risk assessments for each AI system, addressing security, privacy, legal, reputational, and competency risks. These assessments should align with the National Institute of Standards and Technology (NIST) AI Risk Management Framework.

An AI inventory will be kept by ITS, and all government entities must inform ITS of the AI systems they use within 180 days of the Policy's coming into force on January 8, 2024. This inventory is to be made public where practicable.

The Policy also requires ongoing re-assessments and re-training in light of the rapidly evolving AI landscape.

The Policy is most flexible when it comes to intellectual property law. Here, it requires government entities to "confer with their counsel's office" before using copyrighted materials as AI input, citing the ongoing evolution of the legal landscape in this regard.

Privacy Obligations

A significant focus of the Policy is on privacy obligations related to AI systems. Government entities are required to develop policies and controls to ensure the appropriate use of AI with respect to personally identifiable, confidential, or sensitive information. The privacy controls given as examples include:

  • A privacy impact assessment;
  • Privacy-oriented settings, including data minimization, such as only processing data that is necessary during the development and use of the AI system;
  • Data retention settings that follow the requirements of federal and state standards;
  • Ensuring the accuracy of data put into the AI system and the AI system’s outputs;
  • Disposal of the data once the purpose of using the data has been fulfilled, when possible, in compliance with applicable state and federal laws;
  • Providing data subjects with control and transparency in relation to data processing.
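The data minimization, retention, and disposal controls above can be made concrete in code. The following is a minimal Python sketch under assumed details: the field names, allow-list, and retention window are hypothetical illustrations, not drawn from the Policy.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical allow-list illustrating data minimization: only fields
# needed for the AI system's stated purpose are retained.
REQUIRED_FIELDS = {"case_id", "category", "description"}

def minimize(record: dict) -> dict:
    """Drop any fields not strictly necessary for processing."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

def is_expired(ingested_at: datetime, retention: timedelta, now: datetime) -> bool:
    """Flag records whose retention window has lapsed, so they can be
    disposed of once the purpose of processing has been fulfilled."""
    return now - ingested_at > retention

record = {
    "case_id": "A-17",
    "category": "permits",
    "description": "renewal request",
    "ssn": "123-45-6789",         # sensitive and unnecessary: dropped
    "home_address": "1 Main St",  # sensitive and unnecessary: dropped
}
slim = minimize(record)
print(slim)  # only the allow-listed fields survive
```

In practice the allow-list and retention period would come from the entity's privacy impact assessment and applicable state and federal retention schedules rather than being hard-coded.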

Moreover, according to the Policy, government entities must adhere to information security policies and standards, naming in particular encryption and pseudonymization, to protect data throughout its lifecycle.
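To illustrate pseudonymization in this sense, here is a minimal Python sketch using a keyed hash (HMAC-SHA256). The key value and token length are arbitrary choices for illustration; a real deployment would manage the key in a key management service, not in source code.

```python
import hashlib
import hmac

# Placeholder key for illustration only; in production this would be
# retrieved from a key management service.
SECRET_KEY = b"replace-with-managed-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, repeatable token.
    Without the key, the token cannot be linked back to the person,
    yet the same input always yields the same token, so records can
    still be joined across datasets."""
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

token = pseudonymize("jane.doe@example.gov")
assert token == pseudonymize("jane.doe@example.gov")  # deterministic
assert token != pseudonymize("john.doe@example.gov")  # distinct inputs differ
```

The keyed construction matters: a plain unsalted hash of a low-entropy identifier (like an email address) can often be reversed by brute force, whereas recovering the input from an HMAC token requires the secret key.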

An illustrative example provided in the Policy highlights how seriously the privacy obligations are taken: "Inputting personally identifiable, confidential, or sensitive information into an AI system where that AI system uses that information to build upon its model and/or may disclose that information to an unauthorized recipient" constitutes an unacceptable use of AI. Given that current AI models are known to memorize portions of their training data and occasionally reproduce them verbatim in their outputs, prohibiting this use of AI systems arguably places significant constraints on government entities.

Conclusion

New York’s Acceptable Use of AI Policy represents a step towards promoting responsible AI adoption while safeguarding privacy and ensuring accountability. Because its scope extends to vendors that make their AI systems available to government entities, its effects will likely be felt outside the public sector as well.

Private AI can facilitate compliance with the Policy by removing personally identifiable information from data sets used to train AI systems. Try it on your own data here or request an API key.

 
