AI in Government: The fine balance between applying and regulating AI (Part 2/3)


Ethical AI in Government

Government agencies face a unique challenge when it comes to regulating AI while simultaneously using it. They are bound by legal and ethical obligations to protect citizens’ rights and privacy; therefore, when employing AI, they must do so responsibly to uphold democratic values and protect individual rights. In the first part of this series, we outlined the ethical AI framework. But what would that look like for government entities? Expert Darrell M. West, Senior Fellow at the Center for Technology Innovation, outlines six key steps:

  1. Concrete Codes of Conduct: Government agencies need clear codes of conduct that outline major ethical standards, values, and principles. These should include fairness, transparency, privacy, and human safety.
  2. Operational Tools: Employees involved in AI development must have access to operational tools that promote ethics and fight bias. These tools should be designed with input from ethicists, social scientists, and legal experts to ensure impartial and safe decision-making.
  3. Evaluation Benchmarks: Clear evaluation benchmarks and metrics should be established to assess AI systems’ performance and adherence to ethical principles. These metrics should consider both substantive and procedural fairness (a minimal example of such a metric follows this list).
  4. Technical Standards: Governments should adopt technical standards that guide AI development to prevent idiosyncratic designs and ensure consistent safeguards, especially in areas like fairness and equity.
  5. Pilot Projects and Sandboxes: Government agencies should conduct pilot projects and establish sandboxes for experimenting with AI deployments. This allows testing AI in a controlled environment, minimizing risks, and learning from initial tests.
  6. Workforce Capacity: A well-trained workforce with a mix of technical and non-technical skills is essential. Government agencies should invest in professional development opportunities to keep their employees updated on emerging technologies.
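To make the evaluation benchmarks in step 3 concrete, here is a minimal sketch that computes one common group-fairness metric, the demographic parity difference, over a set of model decisions. The function, the sample data, and the 0.1 tolerance are illustrative assumptions, not prescriptions from West’s framework.

```python
from collections import defaultdict

def demographic_parity_difference(decisions, groups):
    """Largest gap in favorable-decision rates across groups.

    decisions: list of 0/1 outcomes (1 = favorable, e.g. benefit approved)
    groups: list of group labels, aligned with decisions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative benchmark: flag the system if the gap exceeds an
# agency-chosen tolerance (the 0.1 here is an assumed placeholder).
gap, rates = demographic_parity_difference(
    decisions=[1, 0, 1, 1, 0, 1, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates)       # per-group favorable-decision rates
print(gap <= 0.1)  # False: this toy system fails the illustrative benchmark
```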

What are the applications of AI in government?

Recent data shows that over 77% of companies are either using or exploring the use of AI within their business. As we discussed previously, AI has the power to streamline processes – and that is already reshaping customers’ expectations.

According to a Salesforce research report, 83% of consumers expect immediate engagement when they contact a company, while 73% expect companies to understand their unique needs and expectations. Nearly 60% of all customers want to avoid customer service altogether, preferring to resolve issues through self-service features. Naturally, this influences the public sector: citizens today expect seamless digital interactions with government services, similar to their experiences in the private sector.

AI undoubtedly has the power to change government-citizen relations and aid policymakers in their decisions. “Studies have shown that citizens’ digital experience with government services is a large predictor of trust in the government,” says John Weigelt, National Technology Officer at Microsoft Canada. “Artificial intelligence-enabled service delivery, as part of government’s digital transformation, helps ensure that constituents get the right outcomes from their interactions with governments.”

The possibilities are endless – some of the main applications of AI in government relations are:

  • Enhancing Digital Interactions with Public Services: Generative AI can not only analyze data but also generate content suited to the context of a given interaction. In government, it can broaden the coverage of services as well as their customization.
  • Back Office Automation: AI technologies like robotic process automation (RPA), natural language processing (NLP), and computer vision are digitizing paper documents and accelerating claims processing. This not only reduces paperwork but also enhances the speed and accuracy of service delivery (see the sketch after this list).
  • Data-Based Policymaking: AI enables policymakers to make more informed decisions based on data. It offers insights into industry regulation, social and environmental impacts, and citizen perceptions of government policies, resulting in more effective and well-informed policymaking across all government sectors.
  • Health and Environmental Predictions: AI can help identify patterns and impacts related to public health and climate change, as well as predict risks of housing and food insecurity. This assists in crafting policies to improve citizens’ quality of life.
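As a small taste of the back-office automation described above, the sketch below uses the open-source spaCy library to pull names, dates, and amounts out of a digitized claim letter so a case-management form could be pre-filled. The sample text and the routing idea are illustrative assumptions, not a reference to any specific government system.

```python
# Minimal sketch: entity extraction from a digitized claim document.
# Setup (real library): pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

claim_text = (
    "On March 3, 2023, Jane Doe of Springfield filed a claim "
    "requesting $1,250 in storm damage assistance."
)

doc = nlp(claim_text)

# Group recognized entities by type so a downstream (hypothetical)
# routing step could pre-fill a case-management form.
extracted = {}
for ent in doc.ents:
    extracted.setdefault(ent.label_, []).append(ent.text)

print(extracted)
# Output varies with the model, but typically includes DATE, PERSON,
# GPE (location), and MONEY entities from the text above.
```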

There are already successful case studies of AI in government in countries such as Australia, Canada, and the United States. For example, Australia’s Taxation Office chatbot has handled more than 3 million conversations and resolved 88% of queries on first contact. In the US, Atlanta’s Fire Rescue Department used predictive analytics to accurately predict 73% of building fire incidents.

“Empowering employees, finding efficiencies and transforming operations are key pillars of government digital transformation efforts,” says Weigelt. “Artificial intelligence helps employees gain faster and more accurate access to knowledge, speed and streamline decision making and provides a platform to reimagine how government operations are performed.”

Regulating AI: What’s in store

From the makers of GDPR: The proposed EU AI Act

AI has altered the way we interact with technology. Thierry Breton, the EU’s Commissioner for Internal Market, aptly noted, “[AI] has been around for decades but has reached new capacities fueled by computing power.”

The EU AI Act represents a pivotal response to AI’s transformative potential and the need to mitigate its inherent risks. It underscores the global significance of AI regulation and the need for international collaboration to address the challenges and opportunities posed by AI technologies.

First introduced by the European Commission in April 2021, the EU AI Act has implications that extend far beyond European borders. Just as the EU’s General Data Protection Regulation (GDPR) has influenced the development of data protection laws in other countries, the Act, as the world’s first comprehensive regulatory framework for AI, sets a precedent for responsible AI governance worldwide.

However, the Act has not been without its share of criticisms. Some European companies have voiced concerns about its potential impact on competitiveness and technological sovereignty. Nevertheless, the proposed legislation marks a significant step towards a harmonious balance between innovation, ethics, and accountability in the world of AI.

What it entails

The EU AI Act adopts a risk-based approach to AI regulation, categorizing AI systems based on the level of risk they pose to users. This classification serves as the foundation for imposing varying degrees of regulation on AI technologies. Three primary risk categories emerge (a simplified classification sketch follows the list):

  • Unacceptable Risk: This category encompasses AI systems that pose a direct threat to individuals or specific vulnerable groups. Examples include AI-driven devices that manipulate children into engaging in harmful behaviours, or social scoring systems that categorize individuals based on personal characteristics. The EU takes a stringent stance against such AI systems, proposing an outright ban.
  • High Risk: AI systems in this category negatively impact safety or fundamental rights. They include AI used in products covered by the EU’s product safety legislation, such as toys, aviation, medical devices, and more. Additionally, systems in certain specific areas, like biometric identification, critical infrastructure management, and law enforcement, require registration in an EU database. All high-risk AI systems undergo a rigorous assessment before market placement and throughout their lifecycle.
  • Limited Risk*: AI systems posing limited risk must comply with transparency requirements, ensuring users are informed about AI-generated content. This includes AI systems that generate or manipulate image, audio, or video content, such as deepfakes.
*Under an earlier version of the Act; more information below.
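To make the risk-based approach tangible from an engineering standpoint, below is a minimal sketch of a triage helper that maps an AI system’s declared attributes to the tiers above (plus an implied minimal-risk tier for everything else). The attribute names and mapping logic are simplified assumptions for illustration; the Act’s actual legal classification is far more detailed, and, per the footnote, the limited-risk treatment of deepfakes reflects an earlier draft.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # assessed before and after market placement
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # everything else; no extra obligations

@dataclass
class AISystem:
    # Simplified, assumed attributes; the Act's real criteria are broader.
    manipulates_vulnerable_groups: bool = False
    social_scoring: bool = False
    impacts_safety_or_fundamental_rights: bool = False
    generates_synthetic_media: bool = False

def classify(system: AISystem) -> RiskTier:
    if system.manipulates_vulnerable_groups or system.social_scoring:
        return RiskTier.UNACCEPTABLE
    if system.impacts_safety_or_fundamental_rights:
        return RiskTier.HIGH
    if system.generates_synthetic_media:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A deepfake generator lands in the limited-risk tier under this sketch.
print(classify(AISystem(generates_synthetic_media=True)))  # RiskTier.LIMITED
```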

How it affects Generative AI

Generative AI tools like ChatGPT are not exempt from the regulations proposed in the EU AI Act. On June 14, 2023, following the immense adoption of ChatGPT by both ordinary consumers and organizations, the European Parliament passed a draft law with relevant amendments to the EU AI Act. The amendments broadened the Act’s scope to encompass new areas of concern, including environmental impact and effects on political campaigns.

Another noteworthy aspect is the introduction of “foundation models” and Article 28b, which exclusively addresses the responsibilities of providers of such models. A “foundation model” refers to an AI model trained on extensive and diverse data, designed to produce a wide range of outputs, and adaptable to various specific tasks – much like ChatGPT.

Providers of foundation models now face additional transparency requirements. Among other obligations, they must:

  • Disclose that content was generated by AI.
  • Design the model to prevent the generation of illegal content.
  • Publish summaries of the copyrighted data used for training.

These measures aim to ensure accountability and transparency in AI-generated content, protecting users and society at large from the potential misuse of AI technologies.
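As a minimal sketch of what the first obligation could look like at the API layer, the snippet below wraps model output in a response object that carries an explicit AI-generation disclosure. The field names and the placeholder generation call are assumptions for illustration; the Act does not prescribe any particular wire format.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DisclosedOutput:
    text: str
    ai_generated: bool  # the disclosure itself
    model_id: str       # which foundation model produced the text
    generated_at: str   # ISO-8601 timestamp

def generate_with_disclosure(prompt: str, model_id: str = "example-model") -> DisclosedOutput:
    # Hypothetical stand-in for a real foundation-model call.
    text = f"(model output for: {prompt})"
    return DisclosedOutput(
        text=text,
        ai_generated=True,
        model_id=model_id,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )

response = generate_with_disclosure("Summarize the permit application.")
print(json.dumps(asdict(response), indent=2))
```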

This shift is particularly intriguing considering that chatbots and “deepfakes” were previously considered low-risk and subjected to minimal transparency obligations in earlier versions of the Act.

What about the US?

Following the EU AI Act, the White House Office of Science and Technology Policy has proposed a “Blueprint for an AI Bill of Rights” to protect the American public in the age of artificial intelligence. The Blueprint contains five principles, each accompanied by a technical companion that provides guidance for responsible implementation:
  • Safe and Effective Systems: The first principle emphasizes the need to protect individuals from AI systems that are unsafe or ineffective.
  • Algorithmic Discrimination Protections: This principle aims to prevent discrimination by algorithms, ensuring that AI systems are designed and used in an equitable manner.
  • Data Privacy: The third principle addresses the importance of protecting individuals from abusive data practices, giving people agency over how their data is used.
  • Notice and Explanation: People should be informed when an automated system is being used and should understand how and why it influences outcomes that affect them.
  • Alternative Options: Individuals should be able to opt out where appropriate and access human assistance when encountering problems with AI systems (see the sketch after this list).
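As a minimal sketch of how the “Notice and Explanation” and “Alternative Options” principles might surface in a citizen-facing service, the snippet below attaches a plain-language notice, an explanation, and a human-review path to every automated decision. The eligibility rule, threshold, and URL are hypothetical assumptions, not anything prescribed by the Blueprint.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    outcome: str      # e.g. "benefit_denied"
    explanation: str  # plain-language reason (Notice and Explanation)
    notice: str = "This decision was made with the help of an automated system."
    human_review_url: str = "/appeals/human-review"  # Alternative Options (assumed path)

def decide_eligibility(income: float, threshold: float = 30_000.0) -> AutomatedDecision:
    # Toy rule standing in for a real model; the threshold is an assumption.
    if income <= threshold:
        return AutomatedDecision(
            outcome="benefit_approved",
            explanation=f"Reported income {income:,.0f} is at or below the {threshold:,.0f} limit.",
        )
    return AutomatedDecision(
        outcome="benefit_denied",
        explanation=f"Reported income {income:,.0f} exceeds the {threshold:,.0f} limit.",
    )

decision = decide_eligibility(42_000.0)
print(decision.notice)
print(decision.explanation)
print("Request human review at:", decision.human_review_url)
```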

The release of the Blueprint has generated mixed reactions. Some experts argue that it does not go far enough and lacks the checks and balances present in the EU AI Act; others fear that regulation could stifle innovation.

Nevertheless, in December 2020, the US federal government issued Executive Order 13960, which emphasizes the benefits of AI for government operations, public services, and efficiency while highlighting the need to maintain public trust and protect privacy and civil rights.

The order sets forth principles for AI use in non-national-security contexts, stressing accountability, transparency, and adherence to laws and values. It also mandates that agencies inventory their AI use cases, ensure their consistency with the principles, and share this information with the public.
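Because the inventory mandate is, at bottom, a structured-data exercise, here is a minimal sketch of what a single use-case record could look like. The field names and values are illustrative assumptions, not the official inventory schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AIUseCaseRecord:
    # Illustrative fields, not the official inventory schema.
    agency: str
    use_case: str
    purpose: str
    consistent_with_principles: bool  # checked against the EO 13960 principles
    publicly_shared: bool

record = AIUseCaseRecord(
    agency="Example Agency",  # hypothetical
    use_case="Chatbot for benefits FAQ",
    purpose="Reduce call-center load with self-service answers",
    consistent_with_principles=True,
    publicly_shared=True,
)

print(json.dumps(asdict(record), indent=2))
```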

For both the European Union and the United States, the journey to regulate AI is just beginning, but it is an important step toward ensuring that AI benefits society while safeguarding individuals’ rights and well-being. The path forward requires a delicate balance between innovation and regulation, with an eye on the evolving global landscape of AI governance.

 

“AI in Government” series, in partnership with Microsoft Canada.

Part I: Navigating the Future of AI: Responsibility, Regulation, and Generative AI Impact

Part II: AI in Government: The fine balance between applying and regulating AI

Part III: Generative AI Impact on Governments

Additional resources:

Read Brad Smith’s blog “How do we best govern AI?”

Read Diana Parker’s blog to learn 4 steps to advance your AI journey with Microsoft for Government.

Learn more about the EU AI Act

Learn more about the proposed AI Legislation in Canada

Sign up for our newsletter to stay up to date on AI Regulation

Get Private AI on Azure Marketplace.

 
