
America’s AI Action Plan, Part 2: AI for the Public Good: What the Plan Means for States, Local Governments, and Nonprofits

AI is no longer theoretical for public service. In McKinsey's 2024 survey, 65% of organizations reported regularly using generative AI, nearly double the share from the year prior. That momentum is now shaping delivery models in state and local government and across the nonprofit sector as well.

While the White House's America's AI Action Plan (2025) centers on federal agencies, its emphasis on responsible, scalable AI is a signal to all public-serving institutions, including state, local, and education (SLED) organizations and nonprofits, to accelerate adoption with guardrails. The Plan's companion OMB guidance on governance and procurement provides a concrete blueprint for getting there.

Why SLED and Nonprofits Should Pay Attention

State and local agencies, school systems, and nonprofits face many of the same service pressures as federal programs: rising demand, limited staff, and high public expectations. The themes in America’s AI Action Plan (responsible AI, faster R&D, modern infrastructure, and a skilled workforce) translate directly to their missions in education, housing, workforce development, emergency management, and public health. Early adopters across the public sector already show how AI can improve access, speed, and equity.

  • Better front doors to services: Cities are already using AI to extend constituent services. Los Angeles runs a natural-language 311 AI chatbot, “CHIP,” to route requests through MyLA311, an early example of applied NLP in local government.

  • Multilingual, equitable access: San José built an AI inventory and is piloting real-time translation to make 311 and public meetings more accessible; both are useful patterns for any city with diverse language needs.

  • NGO impact at scale: Feeding America uses algorithmic tools such as MealConnect to optimize food donations and delivery routes, reducing waste and getting supplies where they're needed faster.

Solving for Resource Constraints with Smart, Scalable Tech

Smaller agencies and nonprofits often face tight budgets, lean teams, and legacy systems. Even so, meaningful gains are within reach. By pairing AI with low-code platforms, pre-trained models, and modular digital tools, public-serving organizations can add capacity quickly, keep costs predictable, and avoid major infrastructure changes. The approach fits everyday missions (speeding case processing, improving service intake, and expanding language access) while building a foundation that can scale over time.

A practical path starts with small, low-risk pilots that sit on top of what already exists. Low-code apps can handle intake, status updates, or basic triage, and can connect to current case systems to reduce manual steps and shorten wait times. Pre-trained AI services for summarization, translation, redaction, and document classification can be consumed "as a service," delivering immediate efficiency without hiring data-science teams or training custom models. Many public teams validate these tools in pre-vetted sandboxes before buying; the General Services Administration's USAi.gov hub curates evaluation templates, model cards, and playbooks that help structure early tests and document results. While USAi.gov is currently intended for federal agencies, a skilled industry partner can help apply its best practices to the SLED community.
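To make the "as a service" pattern concrete, here is a minimal Python sketch of calling a managed, pre-trained translation service over HTTP. The endpoint URL, payload fields, and environment variable are hypothetical placeholders, not any specific vendor's API; substitute the documented schema of whatever service your agency has vetted.

```python
import os
import requests

# Hypothetical endpoint for a managed, pre-trained translation service;
# replace with your vetted vendor's documented URL and request schema.
TRANSLATE_URL = "https://api.example.gov/v1/translate"

def translate(text: str, target_language: str = "es") -> str:
    """Send constituent-facing text to a managed translation service."""
    response = requests.post(
        TRANSLATE_URL,
        headers={"Authorization": f"Bearer {os.environ['TRANSLATE_API_KEY']}"},
        json={"text": text, "target_language": target_language},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["translation"]

if __name__ == "__main__":
    print(translate("Your housing application has been received."))
```

No data-science team is required: the agency's work shifts to picking the service, securing the key, and evaluating output quality against its published standards.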

For organizations that need compute and data to prototype but lack internal infrastructure, the National AI Research Resource (NAIRR) pilot provides shared access to resources through the National Science Foundation, giving state universities, labs, and agencies a low-cost way to explore use cases.

Modularity keeps costs and risks in check. Instead of replacing core systems, agencies can add small services, such as translation or records redaction, through simple interfaces and scale one use case at a time. Where possible, the same vetted service can be shared across multiple programs or neighboring jurisdictions to avoid duplication and concentrate quality-assurance effort. States and cities are also publishing patterns others can reuse, reducing policy and procurement friction; California's GenAI Guidelines for practical, low-risk pilots are one example.
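As an illustration of this modular pattern, the sketch below stands up a small, standalone redaction service that existing case systems can call over HTTP without touching their core code. The regex patterns are deliberately simplistic stand-ins; a production deployment would rely on a vetted PII-detection service and agency-approved rules.

```python
import re
from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative patterns only; production redaction should use a vetted
# PII-detection service validated against agency policy.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

@app.route("/redact", methods=["POST"])
def redact():
    """Accept raw text; return the same text with PII patterns masked."""
    text = request.get_json()["text"]
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return jsonify({"redacted_text": text})

if __name__ == "__main__":
    app.run(port=8080)
```

Because the service sits behind one simple interface, a housing program, a benefits office, and a neighboring county could all share the same vetted instance.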

How to Lay the Groundwork for Responsible AI

A durable AI program is less a single deployment and more a repeatable way of working. When executive, legal, security, procurement, and program teams share a common cadence (clarifying mission value, testing in controlled settings, measuring outcomes, and expanding only when risks are understood), AI becomes a steady source of operational gains rather than a one-off experiment.

Such an approach protects equity and privacy, clarifies accountability, and sustains public trust through transparent decisions and defensible documentation. It also reduces delivery risk by making roles, checkpoints, and escalation paths explicit from the outset. Over time, the organization benefits from reusable templates, lightweight reviews, and role-based training that travel from one use case to the next, lowering both cost and time to value. Partnerships with universities and mission-aligned nonprofits fit naturally into this model, contributing evaluation expertise, user research, and workforce development while leaving decision rights with the public entity. The outcome is measured progress: small releases that solve real problems and scale predictably under strong governance.

The steps below establish a practical foundation for responsible AI.

1. Governance and Oversight

Responsible AI begins with strong governance. Agencies should designate an accountable AI lead, establish a cross-functional review group, and clearly define decision rights across program, legal, privacy, procurement, and IT teams. The foundation is reinforced by creating a comprehensive AI inventory aligned with OMB M-25-21, capturing the purpose, data sources, human oversight, evaluation cadence, and privacy controls for each tool. Equally important is community engagement, bringing in service recipients, advocates, and frontline staff to test usability, review accessibility, and surface real-world needs. Finally, oversight must extend to people: role-based training ensures staff are equipped to handle AI responsibly, from program leads to frontline teams.
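As a sketch of what one entry in such an inventory might capture, the Python structure below mirrors the kinds of fields M-25-21 emphasizes (purpose, data sources, human oversight, evaluation cadence, privacy controls). The field names and sample values are our own illustration, not the memo's official schema.

```python
from dataclasses import dataclass

@dataclass
class AIInventoryRecord:
    """One entry in an agency AI inventory (illustrative fields,
    not an official OMB M-25-21 schema)."""
    tool_name: str
    purpose: str
    data_sources: list[str]
    human_oversight: str      # who reviews outputs, and when
    evaluation_cadence: str   # e.g., "quarterly accuracy review"
    privacy_controls: list[str]
    accountable_owner: str

record = AIInventoryRecord(
    tool_name="311 intake chatbot",
    purpose="Route non-emergency service requests to the right queue",
    data_sources=["public service catalog", "request history (de-identified)"],
    human_oversight="Staff review all low-confidence routings daily",
    evaluation_cadence="Quarterly accuracy and language-access review",
    privacy_controls=["no PII retained in prompts", "30-day log retention"],
    accountable_owner="Office of the CIO, AI program lead",
)
```

Kept in version control, records like this double as the documentation trail that audits and leadership transitions will later depend on.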

2. Responsible Design and Risk Management

AI systems must be designed with safeguards from the start. Agencies should map mission value to measurable success criteria, selecting high-value, low-risk use cases and publishing standards for accuracy, timeliness, accessibility, and fairness before projects begin. Lightweight risk audits, guided by NIST’s AI RMF, help identify potential harms, set acceptable performance ranges, and define points for human intervention. Safe technical baselines, such as piloting in compliant cloud environments, enforcing least-privilege access, logging activities, and retaining model interactions, create a secure environment for testing and continuous improvement.
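One way to realize the "log activities and retain model interactions" baseline is a thin wrapper around whatever vetted model endpoint an agency uses, so every prompt, response, and requesting role lands in an append-only audit log. The `call_model` stub below is a hypothetical placeholder for that endpoint.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit log for every model interaction during a pilot.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def call_model(prompt: str) -> str:
    """Stub standing in for the agency's vetted model endpoint."""
    return "[model response placeholder]"

def audited_call(prompt: str, user_role: str) -> str:
    """Call the model and retain the full interaction for later review."""
    response = call_model(prompt)
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_role": user_role,  # supports least-privilege review
        "prompt": prompt,
        "response": response,
    }))
    return response
```

Routing all pilot traffic through a single audited path makes it straightforward to define where human intervention kicks in and to reconstruct any decision later.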

3. Prototyping and Scaling Responsibly

Rather than moving directly to production, begin with small, representative pilots that test standards for accuracy, reliability, accessibility, and equity. Scaling should only occur once these pilots demonstrate consistent results. Documentation and disclosure play a critical role in this stage: maintaining model and evaluation cards, change logs, user guidance, and plain-language summaries builds transparency, strengthens oversight, and sustains public trust throughout deployment.
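A simple way to enforce "scale only once pilots demonstrate consistent results" is a go/no-go gate that compares measured pilot outcomes against the standards published before the project began. The metric names and thresholds below are hypothetical examples, not prescribed values.

```python
# Illustrative go/no-go gate for scaling a pilot: compare measured
# results against standards published before the project began.
# Metric names and thresholds are hypothetical examples.
STANDARDS = {
    "accuracy": 0.95,          # minimum acceptable
    "avg_response_days": 3.0,  # maximum acceptable
    "language_coverage": 0.90, # minimum acceptable
}

def ready_to_scale(results: dict[str, float]) -> bool:
    """Return True only if every published standard is met."""
    checks = {
        "accuracy": results["accuracy"] >= STANDARDS["accuracy"],
        "avg_response_days": results["avg_response_days"] <= STANDARDS["avg_response_days"],
        "language_coverage": results["language_coverage"] >= STANDARDS["language_coverage"],
    }
    for metric, passed in checks.items():
        print(f"{metric}: {'PASS' if passed else 'FAIL'}")
    return all(checks.values())
```

Publishing the gate alongside the pilot's evaluation card makes the scaling decision transparent rather than discretionary.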

4. Partnerships, Procurement, and Outcomes

Long-term sustainability depends on thoughtful procurement and collaboration. Structure contracts with short pilot periods, clear exit ramps, and requirements for explainability, auditability, and performance to avoid vendor lock-in. Measuring outcomes, such as turnaround times, backlog reduction, error rates, language-access improvements, and user satisfaction, helps identify which approaches deliver the greatest mission value. Finally, partnerships with universities, civic-tech groups, and peer agencies can provide evaluation support, shared components, and workforce development, provided that data-sharing follows clear and responsible rules.
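Outcome measurement can start very simply. The sketch below, using hypothetical sample data, compares median case turnaround before and during a pilot; the same pattern extends to backlog counts, error rates, or language-access metrics.

```python
from statistics import median

# Hypothetical turnaround times (in days) sampled before and during a pilot.
baseline = [12, 9, 15, 11, 14, 10]
pilot = [6, 5, 8, 7, 6, 5]

def improvement(before: list[float], after: list[float]) -> float:
    """Percent reduction in median turnaround time."""
    return 100 * (median(before) - median(after)) / median(before)

print(f"Median turnaround reduced by {improvement(baseline, pilot):.0f}%")
```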

Core Practices for Responsible AI: Proven in the Field

With the strategic direction set, the next phase focuses on everyday practices that make responsible AI stick. State agencies, city departments, school districts, and nonprofits can turn policy into results by standardizing a few core habits—how systems are cataloged, how communities are engaged, how risk is checked early, how pilots are run safely, and how decisions are documented. The building blocks below translate high-level guidance into routine delivery and help pilots grow into stable, trusted services:

  1. Shared literacy and system inventory: Many public organizations are cataloging AI and AI-adjacent tools with ownership, data sources, and human oversight. An example is New York City’s citywide framework.
  2. Co-design with impacted communities: Human-centered service design reduces drop-offs and improves equity in eligibility and benefits flows; national projects demonstrate this pattern in practice.
  3. Safe prototyping before scale: Pilot work in compliant cloud environments with clear access controls and audit trails supports measured expansion once accuracy, reliability, accessibility, and fairness meet the stated bar.
  4. Documentation and transparency: Model and evaluation cards, change logs, and public summaries help audits and leadership transitions, especially in health, housing, and education services. 

Building on these steps, county governments are sharing practical roadmaps for peers, such as NACo’s AI County Compass, and cities are formalizing AI policies and standards, such as Austin’s AI policy, so pilots mature into stable, reusable services rather than one-off experiments. 

How TechSur Solutions Helps SLED and Nonprofits Move from Vision to Value

TechSur Solutions specializes in helping public-sector teams turn policy guardrails into living, effective delivery practices, without overcomplicating the process. Our approach aligns with the Plan’s pillars and the realities of constrained teams:

  • AI Implementation & Adoption Playbook: We translate OMB M-25-21 and the NIST AI RMF into templates, checklists, and workflows your staff can run, covering use-case selection, risk reviews, evaluation design, human-oversight procedures, and model change management.
  • Low-Code + AI PoCs: We partner with leading low-code providers to deliver minimum viable products in weeks. 
  • Operational modernization: We design reference architectures for safe deployment (identity, data pipelines, model gateways, audit logging), then help your team reuse them across programs.
  • Measurable outcomes: In a DHS immigration workflow, TechSur’s AI-based anomaly detection cut processing delays by 60%, demonstrating how targeted AI can unlock mission capacity. 

Conclusion: Turning Policy into Mission Outcomes

For SLED and nonprofits, the message in America’s AI Action Plan is clear: start small, prove value, and scale within strong governance. States and cities are already doing it, deploying chatbots to meet residents where they are, translating in real time to broaden access, and codifying AI policies that survive leadership changes. NGOs are using AI to move resources faster and triage limited human attention to the highest-risk cases. The public is watching, and trust hinges on transparency and accountability.

If you’re ready to move from intent to implementation, contact TechSur Solutions to stand up your first pilots, evaluation framework, and acquisition package, all aligned to federal guidance and designed for resource-constrained teams.