AI Implementation for Government Agencies – Considerations

In response to Executive Order 14110 on Artificial Intelligence, government agencies face the task of not just crafting but effectively executing AI roadmaps. Below are some considerations that can guide agencies at any stage of their AI journey, focusing on transitioning from roadmap creation to practical execution.

Designate Key Points of Contact

To ensure the effective implementation of AI roadmaps, it’s essential to appoint one or two individuals who possess deep technical expertise in AI, data science, and machine learning combined with strategic insight. As per Executive Order 14110, these key contacts should have the necessary skills, knowledge, training, and expertise to fulfill their responsibilities. They must focus on coordination, innovation, and risk management specifically for the agency’s use of AI, distinct from general data or IT issues. Agencies may choose an existing official, such as the Chief Information Officer (CIO), Chief Data Officer (CDO), or Chief Technology Officer (CTO), provided they have significant expertise in AI and meet the requirements outlined in the Executive Order.

These key individuals facilitate seamless communication between technical teams and executive leadership, ensuring that AI projects are aligned with the agency’s broader mission. They should coordinate AI efforts across the agency, promoting adherence to federal principles and guidelines while maintaining comprehensive awareness of AI projects through annual inventories.

These leaders must be committed to ongoing learning and agile in adapting strategies to technological advancements and policy shifts. They will guide AI initiatives, ensuring projects are technologically robust, strategically aligned, and ethically sound. They should also advocate for equity and ensure the agency is equipped with the skill sets required for successful implementation.

Establish an AI Governance Board

Federal agencies need to create a board comprising senior leadership roles and AI experts, chaired by the Deputy Secretary and vice-chaired by the Chief AI Officer (CAIO). The board is essential to ensure projects are aligned with ethical standards, federal frameworks like Executive Order 13960, and the directives of Executive Order 14110. The board should convene at least semi-annually and include representation from officials responsible for key enablers of AI adoption and risk management, including IT, cybersecurity, data, privacy, civil rights, and other areas. The CAIO supports the Deputy Secretary in coordinating AI activities and implementing executive orders. External experts should be consulted to broaden the board’s perspective, injecting technical, ethical, and sector-specific expertise.

The board must coordinate among officials overseeing the adoption and risk management of AI, managing algorithmic biases, data privacy, and security concerns. This coordination ensures seamless project integration across departments, aligning AI governance with existing agency risk management strategies (as outlined in OMB Circulars A-123 and A-130) and achieving compliance with Executive Order 14110.

The board’s risk management efforts should focus primarily on:

  • Data Quality and Security: Oversee measures to ensure high data quality and security, managing biases, privacy concerns, and unintended consequences. Emphasize secure data management strategies aligned with federal policies, building trust in AI systems.
  • Societal Impact Assessment: Conduct societal impact assessments to understand AI’s implications on public welfare, ensuring transparency and accountability. Prioritize projects that contribute to the public good, address societal challenges (e.g., food insecurity, climate crisis), and advance equitable outcomes.

Agencies should improve their ability to adopt AI responsibly, using it to increase mission effectiveness while recognizing the limitations and risks of AI. The governance board should promote responsible sharing of AI resources across the federal government, encouraging joint efforts to scale responsible AI adoption and ensure the technology benefits society.


Conduct an Internal Audit

Identifying and evaluating existing AI tools is crucial: agencies should assess how well these tools integrate with operations, whether they are effective, and whether they comply with federal regulations and ethical standards. According to Executive Order 14110, this assessment must receive appropriate attention from the CAIO, who should participate in broader agency-wide risk management bodies and processes. The CAIO must ensure the agency’s AI coordination mechanisms align with its needs based on current and potential AI usage and associated risks. This assessment will enable agencies to align their internal AI efforts with their governance strategies.

A thorough gap analysis is equally crucial at this stage for identifying missing capabilities and improvement opportunities, highlighting strengths and weaknesses in AI adoption. This analysis aims to help agencies shape future strategies and identify key areas for growth. The governance board can convene relevant senior officials to address barriers and manage risks, providing a detailed picture for prioritizing impactful AI development.
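
To make the idea concrete, a gap analysis can be as simple as comparing the capabilities the AI roadmap requires against those the internal audit found in place. The sketch below uses hypothetical capability categories and names; they are illustrative, not prescribed by the Executive Order.

```python
# Minimal gap-analysis sketch: compare required vs. audited capabilities.
# All category and capability names are hypothetical examples.

required = {
    "data":      {"data catalog", "quality monitoring", "bias auditing"},
    "platform":  {"model registry", "secure compute", "CI/CD for ML"},
    "workforce": {"ML engineers", "AI risk officers"},
}

audited = {
    "data":      {"data catalog"},
    "platform":  {"secure compute"},
    "workforce": {"ML engineers"},
}

for category, needed in required.items():
    gaps = needed - audited.get(category, set())
    if gaps:
        print(f"{category}: missing {sorted(gaps)}")
```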

Similarly, collaboration with the Office of Personnel Management (OPM) is required to develop occupational categories for AI roles and bridge existing skill gaps. Agencies should appoint an AI Talent Lead accountable for tracking AI hiring progress, reporting to senior leadership, and providing data to OPM and the Office of Management and Budget (OMB) on hiring needs. The AI Talent Task Force, created under Executive Order 14110, will assist these talent leads in enhancing hiring practices and sharing resources across agencies.


Organize AI Workshops

Organized AI workshops can foster a culture of cross-functional collaboration by uniting stakeholders and external experts. These workshops will provide a platform for sharing custom-developed code, models, and data in compliance with federal guidelines like the OPEN Government Data Act. They encourage the responsible sharing of these AI assets across government agencies, subject to applicable laws and considerations like national security, intellectual property, and individual privacy. For models and data that cannot be fully released, agencies should aim for partial sharing, ensuring that valuable information is made available where it can be safely done. By promoting transparency and reusability, these sessions facilitate discussions on best practices, AI trends, and ethical considerations.

Additionally, internal training is needed so that teams can collaborate on interoperability standards that ensure data assets can be shared and reused efficiently. Agencies may use standardized formats to enhance data interoperability, helping prioritize the sharing of code and data that have significant potential for reuse. Procurement of AI assets should favor terms that allow code and data to be shared, promoting the adoption of open-source practices.
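
As an illustration of what a standardized format might look like, the sketch below builds a minimal metadata record in the spirit of the data.json inventories agencies publish under the OPEN Government Data Act (the Project Open Data schema). All field values are hypothetical placeholders.

```python
import json

# Minimal, hypothetical metadata record in the spirit of the
# Project Open Data schema used for agency data.json inventories.
# All values are illustrative placeholders.
asset = {
    "title": "Benefits Claims Training Dataset",
    "description": "De-identified claims records used to train a triage model.",
    "identifier": "agency-ai-0001",
    "accessLevel": "restricted public",  # public | restricted public | non-public
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "publisher": {"name": "Example Agency"},
    "distribution": [
        {"mediaType": "text/csv", "downloadURL": "https://example.gov/data/claims.csv"}
    ],
}

print(json.dumps(asset, indent=2))
```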

Such workshops help agencies identify, prioritize, and implement use cases that comply with federal regulations, and they encourage collaboration across departments to share valuable insights.

Prioritize Use Cases

To prioritize AI use cases effectively, agencies must systematically assess each use case, focusing on strategic alignment, return on investment (ROI), and scalability. This evaluation needs to analyze technical requirements, data needs, project complexity, and the potential impact on employees to identify high-value, feasible initiatives. However, the GAO’s analysis revealed that only five agencies provided comprehensive data for each reported use case. The remaining 15 had instances of incomplete or inaccurate information, with inventories missing key elements like lifecycle stages or whether the AI use case was releasable. Some inventories even included projects mistakenly classified as AI. Inaccurate inventories hinder effective AI management, making this assessment crucial for addressing gaps.
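
One lightweight way to make the prioritization assessment repeatable is a weighted scoring rubric, sketched below. The criteria, weights, and 1–5 ratings are hypothetical; each agency would calibrate its own.

```python
# Hypothetical weighted-scoring rubric for ranking candidate AI use cases.
# Criteria, weights (summing to 1.0), and 1-5 ratings are illustrative only.

WEIGHTS = {"strategic_alignment": 0.35, "roi": 0.25,
           "scalability": 0.20, "feasibility": 0.20}

use_cases = {
    "document triage": {"strategic_alignment": 5, "roi": 4, "scalability": 4, "feasibility": 3},
    "chat assistant":  {"strategic_alignment": 3, "roi": 3, "scalability": 5, "feasibility": 4},
    "fraud detection": {"strategic_alignment": 5, "roi": 5, "scalability": 3, "feasibility": 2},
}

def score(ratings: dict) -> float:
    return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

# Rank use cases from highest to lowest weighted score.
for name, ratings in sorted(use_cases.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(ratings):.2f}")
```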

Agencies are required to maintain annual AI use case inventories, submit them to OMB, and post them publicly on their websites. These inventories must clearly identify safety-impacting and rights-impacting AI projects, report the associated risks, and outline risk management strategies.
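
As a sketch of what a single inventory entry might capture, the record below includes the elements GAO found commonly missing (lifecycle stage, releasability) alongside the required risk fields. The field names are our own illustration, not an official OMB schema.

```python
from dataclasses import dataclass, field

# Hypothetical inventory entry for an agency AI use case; field names
# are illustrative, not a prescribed OMB format.
@dataclass
class AIUseCase:
    name: str
    lifecycle_stage: str   # e.g., "planned", "in development", "deployed"
    releasable: bool       # can the entry be published on the agency site?
    safety_impacting: bool
    rights_impacting: bool
    identified_risks: list[str] = field(default_factory=list)
    risk_mitigations: list[str] = field(default_factory=list)

entry = AIUseCase(
    name="Benefits eligibility triage",
    lifecycle_stage="in development",
    releasable=True,
    safety_impacting=False,
    rights_impacting=True,
    identified_risks=["disparate error rates across demographic groups"],
    risk_mitigations=["pre-deployment bias audit", "human review of denials"],
)
print(entry)
```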

It is important to follow federal guidance and standards for accurate inventories while ensuring that a strong IT infrastructure supports AI training and inference. Key focus areas include:

  • Data Governance: Maintain practices that prioritize data quality and representativeness while minimizing bias (see the sketch after this list).
  • Cybersecurity: Streamline processes to meet AI security needs and promote continuous authorization.
  • Generative AI: Explore beneficial uses while implementing oversight mechanisms to reduce risks.
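
To make the data governance point concrete, the sketch below compares the demographic mix of a hypothetical training sample against a reference population and flags groups whose share drifts beyond a tolerance; the group labels, shares, and tolerance are illustrative only.

```python
from collections import Counter

# Hypothetical representativeness check: compare a training sample's
# group shares against reference population shares.
REFERENCE = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}
TOLERANCE = 0.05  # flag shares more than 5 points off the reference

sample = ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5
counts = Counter(sample)
total = len(sample)

for group, expected in REFERENCE.items():
    observed = counts.get(group, 0) / total
    if abs(observed - expected) > TOLERANCE:
        print(f"{group}: observed {observed:.0%} vs expected {expected:.0%} - review")
```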

Removing barriers to responsible AI use will ensure agencies maximize the benefits of their AI use cases while managing risks effectively.

Initiate AI Project Development

Establishing dedicated teams and partnerships with technology providers and research institutions is an important step in the initial development phase. Once these foundational elements are in place, agencies can initiate pilot projects, which allow the dedicated teams to apply their expertise in a controlled environment and refine AI concepts and methodologies before scaling up for broader deployment.

To manage risks effectively during project development, agencies must implement practices that address potential impacts on public rights and safety. By December 1, 2024, agencies must align their AI practices with the minimum standards set forth in Section 5(c) of the implementing OMB memorandum (M-24-10), particularly for “safety-impacting” and “rights-impacting” AI. Any AI application that cannot meet these standards must be terminated. This compliance is essential to ensuring that AI projects are developed responsibly and align with ethical, regulatory, and safety requirements.

Agencies must review all current and planned AI projects to determine if they fall under the definitions of safety-impacting or rights-impacting AI. Such projects must adhere to minimum risk management practices, particularly when their output serves as a primary basis for decisions. If the CAIO, in collaboration with other relevant officials, finds a particular AI project doesn’t meet these definitions, they can adjust the determination based on a documented, context-specific risk assessment. The CAIO is responsible for tracking these assessments and revisiting them when significant changes occur, ensuring projects remain compliant.
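
One simple way to keep these determinations auditable is a dated log that records each decision, its rationale, and when it must be revisited. The structure below is a hypothetical sketch, not a mandated format.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record of a CAIO determination for one AI project.
@dataclass
class RiskDetermination:
    project: str
    safety_impacting: bool
    rights_impacting: bool
    rationale: str    # documented, context-specific risk assessment
    decided_on: date
    review_by: date   # revisit by this date, or sooner on significant change

    def due_for_review(self, today: date) -> bool:
        return today >= self.review_by

det = RiskDetermination(
    project="Call-center summarization",
    safety_impacting=False,
    rights_impacting=False,
    rationale="Output is advisory; humans make all benefit decisions.",
    decided_on=date(2024, 6, 1),
    review_by=date(2025, 6, 1),
)
print(det.due_for_review(date.today()))
```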

Such an approach to managing risk, starting with pilot projects and following strict guidelines, enables agencies to utilize the power of AI while protecting public safety and rights. Partnerships with external organizations help refine the project, ensuring that it remains ethical, scalable, and beneficial for broader application across the agency.


AI Strategy Refinement and Implementation

This step requires iterative and adaptive methods for regularly updating strategies as technology evolves and new federal priorities emerge. Reviewing regulatory plans in collaboration with OMB and aligning them with the Regulation of AI Applications memorandum (M-21-06) ensures comprehensive compliance.

To achieve this, agencies should:

  • Test AI in Real-World Contexts: Conduct rigorous testing to verify that AI performs effectively under conditions similar to its intended environment. Testing should follow domain-specific best practices and incorporate feedback from operators, reviewers, and those affected by the AI system.
  • Independently Evaluate AI: An independent agency authority, such as the CAIO or oversight board, must review the AI’s impact assessment and real-world testing results. This ensures the AI works as intended and that benefits outweigh the risks.
  • Monitor AI Performance: Implement ongoing procedures to monitor AI for performance degradation or adverse impacts on rights and safety (a monitoring sketch follows this list). Scale up new features incrementally and defend against AI-specific exploits.
  • Evaluate Risks Regularly: Conduct periodic human reviews at least annually and after significant modifications to ensure evolving risks are addressed. Confirm that minimum practices still mitigate existing risks or identify new response options.
  • Consult Communities and the Public: Incorporate feedback from affected communities and the public. For sensitive contexts like fraud prevention, representative groups can offer practical insights without compromising investigations.
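
As a minimal illustration of the monitoring practice above, the sketch below tracks rolling accuracy over recent human-verified outcomes and raises an alert when it falls below a floor; the metric, window size, and threshold are hypothetical.

```python
import random
from collections import deque

# Hypothetical performance-degradation monitor: alert when rolling
# accuracy over the last WINDOW human-verified decisions drops below FLOOR.
WINDOW, FLOOR = 200, 0.90
recent = deque(maxlen=WINDOW)

def record_outcome(correct: bool) -> None:
    recent.append(correct)
    if len(recent) == WINDOW:
        accuracy = sum(recent) / WINDOW
        if accuracy < FLOOR:
            print(f"ALERT: rolling accuracy {accuracy:.2%} below {FLOOR:.0%}")
            recent.clear()  # reset so alerts fire at most once per window

# Simulated decision stream whose quality slowly degrades.
random.seed(0)
for step in range(2000):
    record_outcome(random.random() < 0.97 - 0.0003 * step)
```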

Agencies should ensure AI aligns with the Constitution and complies with applicable laws on privacy, intellectual property, cybersecurity, and human rights. By refining their strategies iteratively, government agencies can adapt to technological shifts while maintaining alignment with legal standards and effectively managing AI-related risks.


In accordance with the executive order, agencies are mandated to navigate AI advancements with transparency, accountability, and non-discriminatory practices. As AI progresses, its integration into government functions poses both challenges and opportunities. These considerations help steer agencies through the implementation phases—discovery, development, execution, and review—ensuring responsible, effective, and ethical deployment of AI technologies.

By emphasizing mission-critical priorities and exercising judicious oversight, agencies can lead an AI-driven evolution that upholds both technological proficiency and institutional integrity. Implementation is an ongoing process; as technology evolves, so too must the solutions and strategies used to responsibly harness AI’s potential, continuously adapting to meet emerging needs and maintain the efficacy of public services.