New Study Reveals AI Enthusiasm Amidst Poor Preparation

New study shows strong AI enthusiasm in public sector despite low preparedness, highlighting opportunities and challenges ahead.


Across many city halls today, staff are testing AI tools on their lunch breaks while policies, training, and data systems lag far behind. That gap between curiosity and capacity is exactly what a new study on AI enthusiasm in the public sector brings into sharp focus.

The research, based on a survey of finance and operations leaders, shows strong interest but “low preparedness” for Artificial Intelligence in government. While 57% of respondents say they are exploring AI, only 16% are running pilots and fewer than 2% use it widely across departments. For residents dealing with delayed permits or slow benefits processing, that gap is not abstract; it shapes everyday experiences of the city.

Government AI enthusiasm meets fragile readiness

The survey captures a moment where Government AI feels both exciting and fragile. Leaders are drawn to the promise of faster workflows and better data, yet they operate inside risk-averse institutions, bound by public trust and tight budgets. Privacy and security worries top the list of barriers, cited by 57% of those surveyed, ahead of issues like outdated systems and limited staff time.


Other global studies echo this tension. One Google public sector analysis finds nearly 90% of US federal agencies already use some form of AI, yet detailed AI Readiness frameworks remain patchy. Another index shows that while more than 70% of public servants worldwide now touch AI tools, only a minority believe their governments use them effectively. The pattern is the same: enthusiasm outpacing preparation.


Urban innovation: cities pushing ahead on AI

Despite these constraints, some cities treat Technology Adoption as part of their core infrastructure, not a side experiment. San Francisco, for instance, has opened approved AI tools to about 30,000 employees for tasks like summarising memos and analysing datasets. For a city of over 800,000 residents, tiny time savings at that scale can quickly turn into more responsive services.

Smaller US cities such as Bellingham and Everett in Washington state use AI to draft internal documents, support research and inform internal policy design. Their approach is modest, but it shows how even mid-sized administrations can trial Public Policy-oriented AI, provided there is guidance on data protection and worker safeguards. These local experiments give residents a glimpse of what lower-friction government interactions could feel like.

How public sector AI actually works on the ground

Behind the buzzwords, the study reveals very concrete ambitions. Around 68% of surveyed leaders hope AI will bring productivity gains and time savings. Many frame success in human terms, not just technical metrics: 40% say they will judge projects by staff hours saved, while others highlight fewer compliance errors, quicker responses or reduced backlogs.

Use cases cluster around four areas: procurement and project generation, forecasting and scenario planning, grant research and matching, and routine document processing. These are not the headline-grabbing “smart city” visions of autonomous fleets. They are the unseen workflows that determine whether a housing grant reaches a family in weeks or months, or whether a local climate project secures the funding it needs before deadlines close.

From friction to flow: where AI helps residents

To make this tangible, imagine Maya, a resident applying for a housing retrofit subsidy in a mid-sized European city. Today, her application might pass through several offices, with staff copying data between systems. With carefully introduced AI, much of that repetitive checking and sorting can happen in seconds, while staff focus on complex cases and resident support.

Other sectors show a similar pattern. Healthcare pilots, such as projects on AI-powered mammography, use automation to amplify human expertise rather than replace it. In government, the equivalent would be caseworkers spending less time digging through files and more time on direct contact with vulnerable residents. The payoff is not only efficiency but also a more humane experience of public services.

What the new study reveals about obstacles and risks

Alongside optimism, the study underlines hard constraints. Respondents point to four main obstacles: privacy and security concerns, unclear state and federal guidance, legacy technology that cannot integrate AI tools, and a shortage of resources to run new initiatives. Each barrier has a human face. When data systems are fragmented, chatbots fail to answer basic questions. When privacy rules are unclear, leaders stall projects that might have streamlined services.

Research on public workers, such as the analysis available through the Roosevelt Institute, warns that poorly designed automation can overload staff and harm service quality. The survey’s authors acknowledge this risk, stressing that AI only works when it removes friction, not when it shifts bureaucratic weight from one group of people to another. For city dwellers, that distinction can mean the difference between faster permits and longer queues.

Stakeholders shaping AI readiness in cities

Transforming AI Readiness from slogan to reality involves several groups. Elected officials set the political tone, balancing innovation with accountability. Chief information and data officers handle infrastructure and security. Unions and frontline workers test whether tools fit real workflows. Residents, advocacy groups, and local media scrutinise impacts on privacy, bias, and access.

Internationally, organisations such as the European Commission’s AI Watch and major consultancies provide frameworks and training for the Public Sector. Large tech providers, from cloud platforms to start-ups, pitch new solutions and sometimes co-fund pilots. Yet without clear rules and investment, even generous offers, such as generative AI support programmes, risk remaining pilot projects rather than citywide utilities.

Scaling government AI: from pilots to daily life

For AI to move from scattered experiments to everyday infrastructure, cities need a playbook. That usually includes clear data governance, procurement rules designed for algorithmic tools, workforce training, and resident-facing safeguards such as appeal mechanisms for automated decisions. Without this scaffolding, the study suggests, most administrations will stay stuck at the “exploring” stage.

Some governments have begun to publish AI strategies that treat algorithms as part of long-term infrastructure investment, alongside roads or transit. Surveys like the Public Sector AI Adoption Index show that countries with clearer guidance see higher confidence among public servants. Where guardrails and training exist, staff tend to describe AI not as a threat, but as something empowering.

What this AI enthusiasm means for city residents

For people living in dense urban areas, the stakes are high. AI could help match residents to social services faster, reroute maintenance crews to potholes before complaints spike, or model flood risks under a changing climate. It might even free up budget for priorities such as clean energy or cleaner transport fleets, linking the technology to broader sustainability debates.

Yet the same tools, if rushed or poorly governed, could deepen inequalities: automated systems that misinterpret non-standard life situations, digital channels that leave people without internet access behind, or data practices that erode trust. The study’s central message is that low preparedness is not simply a technical problem. It is a social question about who benefits from Artificial Intelligence in our cities, and who gets left waiting at the end of the service line.

  • Residents gain when AI cuts waiting times and makes services clearer.
  • Public workers benefit when tools reduce repetitive tasks, not autonomy.
  • City leaders need frameworks that balance innovation, equity and trust.
  • Tech providers must adapt products to public values, not just speed.

Why are governments so enthusiastic about AI despite low readiness?

Many public sector leaders see artificial intelligence as a way to reduce backlogs, speed up decisions and stretch limited budgets. Even if their data systems and policies are not fully prepared, they do not want to miss opportunities to improve services for residents. This creates a gap where AI Enthusiasm runs ahead of concrete AI Readiness.

How does public sector AI impact everyday city services?

Government AI often targets back-office tasks such as document review, forecasting, and grant matching. When these processes run faster and with fewer errors, residents can experience shorter queues, quicker permits, and more predictable responses from public agencies. The impact is felt in daily interactions with housing, transport, and social services.

What are the main risks of rapid AI adoption in government?

Key risks include breaches of privacy, biased or opaque algorithmic decisions, and additional workload for staff if tools do not fit real workflows. Without clear public policy, strong oversight and community input, AI deployments can erode trust and widen existing inequalities in access to services.

Which stakeholders should shape rules for government AI?

Elected officials, digital and data leaders, unions, frontline public servants, residents and civil society groups all have a role. Technology vendors contribute expertise, but decisions about how AI affects rights, fairness and access need strong democratic oversight, not just technical judgement.

What should city dwellers look for as AI use grows in their area?


Residents can watch for clear explanations of where AI is used, options to appeal automated decisions, and opportunities to give feedback on new tools. Strong interest from local leaders is positive when paired with transparency, safeguards and a focus on making services more human, not less.
