The Alan Turing Institute estimates that 41% of public sector time could be AI-supported, and the UK Government’s AI strategy intends to ‘transform our public services with AI’. There is no ambiguity about the direction that public services are expected to take. AI is a matter of when, not if, and encouragement to adopt it will likely become a requirement to do so.
Some will be nervous, some enthusiastic, and some a mixture of both. AI absolutely does have transformational potential for the bodies, organisations, and departments that embrace it, but some caution is wise. For AI to deliver the outcomes that the public sector is hoping for and expecting, the data foundations must be secure. If they are not, AI will be at best counterproductive, and at worst disastrous. If they are, then AI can be healthy, productive, and even revolutionary for the public sector.
Public sector data and its relationship with AI
Data is the input and the output of AI. Not only that, AI amplifies what it consumes. So, a small gap in training data becomes a gaping hole in the results. An unnoticed bias in the dataset becomes flagrant discrimination. AI simply pursues what it thinks is correct, and it takes what it is fed to be ‘the truth’. If the entire foundation of how it thinks is somehow incorrect, then it will reproduce and amplify those errors. Here are the risks, and how to prevent them.
AI mistrust and disappointment, and how to prevent them
Public bodies face frequent calls and mandates for greater efficiency, and AI genuinely does represent a golden opportunity to achieve that. However, an organisation’s or team’s relationship with AI can undermine that, leaving no improvement in productivity and simply creating resentment. For example, if a team member is promised that AI will save them from some of their most boring tasks, they will likely be relieved. However, if their manager is nervous about AI’s work and instructs the colleague to check all of the output rather than trust it, then the inefficiency is moved elsewhere, rather than eliminated. The team member who expected to be rid of the monotonous work is still stuck with it, and comes to view AI as a broken promise rather than a useful assistant.
The solution is cleanliness, control, and confidence in data.
If an organisation knows where its data is, who can access it, and how it should be used responsibly, then it can control exactly what its AI is fed and be sure the model receives only what it is meant to receive. With that, AI’s output becomes dependable, trust in it grows, and its use becomes fluent and productive.
Preventing AI bias in the public sector
Any organisation will wish to prevent bias (and any resulting unwanted discrimination) from affecting their AI, but the public sector might feel particular pressure to avoid it.
There have been major examples from the public and private sectors of disastrous AI bias. Take the case of the Notting Hill Carnival and facial recognition cameras that delivered false positives 98% of the time. Or, the recruitment AI for software engineers that was inadvertently taught to penalise female applicants. Bias can come as much from absent data as it can from superfluous data, so the guiding principle must be to train AI with exactly the amount and exactly the kind of data that it needs — no more and no less. It is also vital to know and control where all of the AI’s training data comes from and how it is fed to the model, so that the organisation can prevent existing bias in datasets from becoming the bias of an AI model.
Maximising data for public good with AI readiness
AI transformation starts with data foundations. Before exploring AI’s potential, you must understand your data: where it is, who has access to it, and how it can be used responsibly. Creating robust systems for data governance and cataloguing helps to ensure regulatory compliance and to uphold ethical standards. Agile supports public bodies in building AI readiness through practical automation, effective data governance, and comprehensive data cataloguing. Our approach prioritises AI preparedness over urgency, helping organisations take confident, considered steps toward digital transformation.
With the right foundations in place, you can make better decisions, improve service delivery, and use data for public good.
To understand how to ensure your data’s accuracy, accessibility, safety and reliability, read our guide to Citizen Master Data Management.