The rush to understand the new socio-economic contexts created by the wide adoption of AI is justified by its far-ranging consequences, spanning almost every walk of life. Yet the public sector's predicament is a tragic double bind: its obligations to protect citizens from potential algorithmic harms are at odds with the temptation to increase its own efficiency, or, in other words, to govern algorithms while governing by algorithms.
Whether such a dual role is even possible has been a matter of debate. The challenge stems from algorithms' intrinsic properties: unlike other digital solutions long embraced by governments, they create externalities that rule-based programming lacks. As pressures to deploy automated decision-making systems in the public sector become prevalent, this paper examines how the use of AI in the public sector, in relation to existing data governance regimes and national regulatory practices, can intensify existing power asymmetries. To this end, by investigating
- the legal and policy instruments associated with the use of AI for strengthening the immigration process control system in Canada;
- “optimising” employment services in Poland, and
- personalising the digital service experience in Finland,
the paper advocates the need for a common framework to evaluate the potential impact of the use of AI in the public sector.
In this regard, it discusses the specific effects of automated decision support systems on public services and the growing expectations for governments to play a more prominent role in the digital society, ensuring that the potential of technology is harnessed while negative effects are controlled and, where possible, avoided.
This is of particular importance in light of the current COVID-19 crisis, in which AI and the regulatory frameworks underpinning data ecosystems have become crucial policy issues: more and more innovations are based on large-scale data collection from digital devices, while the real-time accessibility of information and services, and the contacts and relationships between institutions and citizens, could strengthen, or undermine, trust in governance systems and democracy.
Maciej Kuziemski (a), Gianluca Misuraca (b)
- (a) Berkman Klein Center for Internet and Society, Harvard University, Cambridge, MA, USA
- (b) European Commission, Joint Research Centre, Digital Economy Unit, Seville, Spain
This paper provides an analysis of the legal implications of the use of AI in government, an investigation of complementarities in data and AI governance processes, and an assessment of the AI methods best suited to supporting trust and strengthening the legitimacy of government. It includes the analysis of four case studies of AI regulatory governance frameworks already in place or under development.
The results of the study served as input to the JRC Science for Policy Report “Overview of the use and impact of AI in public services” (Misuraca, G. and van Noordt, C.).