AI in Public Administration
Hype, Risk or Game-changer?
Published on 22/03/2024
Artificial intelligence (AI) is a hot topic these days, especially since the launch of the AI chatbot ChatGPT at the end of 2022. The hype around it is fuelling hopes that machines will be able to perform routine tasks and complex research in the future. The topic of AI is nothing new in administration, but a lot of uncertainty still remains. What exactly do we mean when we talk about artificial intelligence? Which applications are suitable for public authorities? How can they change the way we work? And what do public administration bodies need to do to truly harness the potential of AI?
How AI is making its way into administration
The breakthrough of AI happened seemingly overnight. The release of ChatGPT triggered a real boom that extended far beyond the borders of the IT world. Even though developments in the field of artificial intelligence are progressing rapidly, the concept of AI is nothing new. In fact, scientists have been researching it since the late 1950s. And even the public sector discovered AI several years ago. Strategies regarding AI in administration now exist at both federal and state level. Many AI applications are already being developed and tested as prototypes in public authorities. But before we evaluate the possibilities offered by AI in administration, let's first take a brief look at what this megatrend is all about.
The third wave of artificial intelligence
The ability to learn, plan and make rational decisions no longer seems to be reserved for humans alone. The current generation of AI applications is already part of the third wave. This wave is characterised by the ability of programs not only to analyse data and recognise patterns, but also to learn and adapt decision-making processes themselves. The systems can automate tasks and imitate human abilities such as speech recognition, image analysis or even creative thinking.
Different types of artificial intelligence
Terms such as machine learning, neural networks or natural language processing frequently come up in connection with AI. Although they are sometimes used interchangeably, different technologies lie behind them. These approaches are often combined to create powerful, versatile AI systems.
Machine learning
This term refers to the learning process of an AI system, which is based on data analysis. Using data as its basis, the AI can recognise patterns and draw conclusions from them. There are different types of machine learning, such as supervised learning, unsupervised learning and reinforcement learning. In deep learning, AI processes and analyses particularly large amounts of data using many-layered models. Unlike traditional machine learning, deep learning models can extract the relevant features from raw data on their own, without manual feature engineering.
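To make the idea of supervised learning concrete, here is a minimal, self-contained sketch (an illustration only, not an administrative system): a model learns the weight and bias of a straight line from labelled (x, y) examples by gradient descent.

```python
# Minimal sketch of supervised learning: fit y ≈ w*x + b to labelled
# examples by gradient descent on the mean squared error (stdlib only).

def fit_linear(data, lr=0.05, epochs=500):
    """Learn weight w and bias b from labelled (x, y) pairs."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Labelled training data following y = 2x + 1
samples = [(0, 1.0), (1, 3.0), (2, 5.0), (3, 7.0)]
w, b = fit_linear(samples)
```

After training, w and b settle close to the values 2 and 1 that generated the labels: the pattern has been learned from the data rather than programmed in.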
Neural networks
Deep learning models rely on neural networks, which are themselves a special form of machine learning. These mathematical models imitate the structure of the human brain and consist of interconnected artificial neurons that process and weigh information. Neural networks are particularly good at recognising complex patterns in large amounts of data.
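The structure described above can be sketched in a few lines (weights here are fixed for illustration; in a real network they would be learned as in the machine learning example):

```python
import math

# Sketch of a feed-forward neural network: layers of artificial neurons,
# each computing a weighted sum of its inputs plus a bias, followed by a
# non-linear activation function.

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases, activation):
    """One layer: each neuron weighs all inputs, adds a bias, activates."""
    return [activation(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two inputs -> hidden layer of two neurons -> one output neuron
hidden = layer([0.5, 1.0],
               weights=[[1.0, 0.5], [-0.5, 1.0]],
               biases=[0.0, 0.5],
               activation=relu)
output = layer(hidden,
               weights=[[1.0, 1.0]],
               biases=[0.0],
               activation=lambda z: 1 / (1 + math.exp(-z)))  # sigmoid
```

Stacking many such layers, each feeding the next, is what gives deep learning its name and its ability to pick up complex patterns.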
Language model
A language model is based on natural language processing technology. The purpose of a language model is to understand and react to human language. Language models are used in text recognition and translation technologies. The most prominent example of this is Generative Pre-trained Transformer (GPT), which forms the basis for what is probably the most famous chatbot in the world. Owing to the sheer number of its parameters and the scale of its training data, the GPT model is considered a Large Language Model (LLM). The current version, GPT-4, is also characterised by multimodality. This means that, in addition to text, it can process other types of data such as images, videos and audio material. The German start-up Aleph Alpha, known for its AI applications in public administration, uses various multimodal LLMs.
Generative AI
These are AI models that can independently generate new content such as texts, images, audio material or videos. Generative AI does not stand in contrast to machine learning, neural networks and language models, but rather, uses these technologies to imitate humans. Examples of generative AI applications include ChatGPT, Aleph Alpha's chatbot Luminous, and image generators such as Midjourney or DALL-E.
What is needed to integrate AI into administration?
The potential is huge, and several practical applications already exist. However, there is still a lot to do before AI technologies become established in everyday administration. On the one hand, in technical terms, ministries and offices must provide the requisite infrastructure, develop databases and set up AI systems in accordance with strict regulations. On the other hand, individuals must be willing and able to work with the new technologies.
A question of organisation: How AI is making its way into the public sector
Technological progress requires people to change, too. Administrative employees not only need to be made aware of AI solutions – they also require specific digital skills to apply and use them. Employees need to become accustomed to using AI as part of their regular workflow. According to a study conducted by the Bertelsmann Stiftung in 2023, this requires far more than just IT expertise. For example, its pamphlet “Orientation in the Skills Jungle” lists seven types of skills that need to be honed in order to successfully implement artificial intelligence in public authorities. These include a technical understanding of AI systems, organisational and communication skills, as well as an awareness of social and ethical aspects.
Step one for AI in public administration: Data collection
In order to put AI into practice, authorities must first collect data of sufficient quality to train the AI. Although Germany’s administrative bodies have masses of data at their disposal, far too much of it has not been digitised yet, and what has been digitised is rarely prepared for structured digital access, so the benefit it currently offers is limited. The simplest remedy is OCR (Optical Character Recognition): OCR capture systems recognise scanned text and automatically convert it into machine-readable text. This process, known as information extraction or data extraction, also categorises the information it reads out. In addition to the text itself, the OCR software thus provides context, so it can distinguish what each piece of information refers to.
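The categorisation step that follows OCR can be illustrated with a small sketch. This is not a real OCR product: it assumes the scan has already been converted to plain text, and the field names and patterns (Aktenzeichen, Datum, Antragsteller) are invented for illustration.

```python
import re

# Illustrative information extraction: once OCR has produced plain text,
# each value found is assigned to a category, turning a flat scan into
# structured, machine-usable data. Patterns below are hypothetical.
PATTERNS = {
    "case_number": re.compile(r"Aktenzeichen:\s*([A-Z0-9\-/]+)"),
    "date":        re.compile(r"Datum:\s*(\d{2}\.\d{2}\.\d{4})"),
    "applicant":   re.compile(r"Antragsteller:\s*(.+)"),
}

def extract_fields(ocr_text):
    """Return recognised values together with their category."""
    result = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(ocr_text)
        if match:
            result[field] = match.group(1).strip()
    return result

scan = ("Aktenzeichen: B-2024/17\n"
        "Datum: 22.03.2024\n"
        "Antragsteller: Max Mustermann")
fields = extract_fields(scan)
```

The output is a labelled record rather than raw text, which is exactly the structured form that downstream AI training needs.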
Step two: Finding data and making it accessible
Internal data collection is only the first step towards feeding artificial intelligence with training data. AI needs a veritable treasure trove of training data, which usually has to be sourced from several locations. This applies in particular to federal administration, where data on a policy area or topic is spread across various sources. The data inventories themselves are often confusing and unstructured, so finding information currently requires time-consuming, manual research. Bundesdruckerei GmbH therefore developed a guide on behalf of the Federal Ministry of Finance: as part of the Data Atlas project, a prototype was created to provide an overview of the financial administration's databases. It uses a metadata catalogue to show employees where data on a specific topic can be found, what type of data it is, the legal basis on which it was collected and who the contact person is.
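A metadata catalogue of the kind described for the Data Atlas can be sketched as a simple searchable index. The field names, entries and contact addresses below are illustrative assumptions, not the actual Data Atlas schema.

```python
from dataclasses import dataclass

# Hedged sketch of a metadata catalogue: each entry records where data
# lives, what kind it is, its legal basis and a contact person.
@dataclass
class CatalogueEntry:
    topic: str
    location: str
    data_type: str
    legal_basis: str
    contact: str

catalogue = [
    CatalogueEntry("vehicle tax", "Fachverfahren A", "structured records",
                   "KraftStG", "referat-iv@example.de"),
    CatalogueEntry("customs tariffs", "Fachverfahren B", "documents",
                   "ZollVG", "referat-ii@example.de"),
]

def find_by_topic(entries, keyword):
    """Point employees to every dataset whose topic matches the keyword."""
    return [e for e in entries if keyword.lower() in e.topic.lower()]

hits = find_by_topic(catalogue, "tax")
```

The point of the design is that employees search the metadata, not the data itself: the catalogue answers "where is it, what is it, and whom do I ask?" without moving a single record.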
Step three: Preparing data for AI
A solution such as the Data Atlas may make data findable, but that data is rarely in a format that makes it immediately usable for AI systems. Artificial intelligence can only be as accurate as the data you feed it with. If the data is not of high quality, this will be reflected in the outcome: “garbage in, garbage out”. Data must be prepared to ensure adequate quality. Above all, this means checking it for completeness and consistency and cleaning it up, for example by removing duplicates and correcting erroneous entries. Preparing data also means augmenting it, combining it, grouping it and more. This lengthy to-do list shows that the process takes a lot of time and expertise. And especially during a shortage of skilled workers, administrative bodies lack the personnel to carry out the processing themselves.
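Two of the basic preparation steps named above, completeness checks and duplicate removal, can be sketched as follows (the record layout and field names are illustrative):

```python
# Sketch of data preparation: drop incomplete rows, then drop exact
# duplicates, keeping the first occurrence of each record.
REQUIRED = ("id", "name", "postcode")

def prepare(records):
    """Return only complete, de-duplicated records."""
    complete = [r for r in records
                if all(r.get(field) not in (None, "") for field in REQUIRED)]
    seen, cleaned = set(), []
    for r in complete:
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            cleaned.append(r)
    return cleaned

raw = [
    {"id": 1, "name": "Meier", "postcode": "10117"},
    {"id": 1, "name": "Meier", "postcode": "10117"},   # exact duplicate
    {"id": 2, "name": "Schulz", "postcode": ""},       # incomplete
    {"id": 3, "name": "Weber", "postcode": "53113"},
]
clean = prepare(raw)
```

Real preparation pipelines go much further (augmenting, combining, grouping), but even this toy version shows why the work needs explicit rules and cannot simply be skipped.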
Step four: Using data – how AI is taking over in federal administration
This premise also applies when the data is actually used in AI projects: here too, administrative bodies can only act independently if they have their own solutions. This is precisely why the Platform Analysis and Information System (PLAIN) project has been in place since 2023. With PLAIN, federal administration has a central platform for AI-supported data analysis. Departments can evaluate and visualise huge data sets in individual applications. This should ultimately benefit political decision-making. The platform, which provides the ministries with software, platform and infrastructure as a service (SaaS, PaaS and IaaS), is backed by the Federal Foreign Office’s service provider for IT abroad – Auslands-IT – and Bundesdruckerei GmbH.
AI in the strategies of the federal government
The service and support offerings show that a lot is happening right now in terms of artificial intelligence, especially at federal level. The German government committed to this years ago. As part of its AI Strategy, Germany has been investing in the research, development and application of artificial intelligence since 2018. Under the heading “AI in public administration”, the document lists possible areas of application, for example in defence against cyber threats, in security authorities, in civil protection, or in sustainability policy. The AI and Big Data Application Lab, which is based at the Federal Environment Agency, has already been put into operation.
In 2023, the German government decided to enhance its data strategy with the aim of improving the data basis required for AI systems. Data should be available to AI applications in better quality and on a larger scale. Especially with regard to AI in administration, the approach taken is described as follows: “We are examining whether and to what extent LLMs should be used sensibly in the public sector while ensuring compliance with data protection regulations.” For example, it should become easier to use unstructured data for LLMs and to break down data silos within administration. Privacy-enhancing technologies (PET) could be used to preserve data privacy.
Regulation, data protection and data ethics
With its reference to data protection and PET, the German government’s data strategy also sheds light on the much-cited elephant in the room. For all its benefits, artificial intelligence has many people worried. Critical voices warn of an AI that is not subject to any control. Democratic values and transparency must always be guaranteed. It is not without reason that Richter advocates for considered use of AI – wherever there is room for discretion, the use of AI is currently ruled out. Without question, there needs to be enough room for innovation to promote the positive development of AI. However, we also need rules that curb risks and protect civil rights. This is particularly important for AI within administration.
GDPR and artificial intelligence
Such rules are provided not least by the GDPR – namely when personal data is used. Personal data is any data that could be used to draw conclusions about natural persons. This could be an issue in administration, especially when training AI. Article 6 (1) GDPR regulates when exactly an organisation may process personal data. It seems utopian that the data subject could explicitly consent (point (a) of the paragraph) to providing personal information for AI training purposes. Points (c) and (f) of Article 6 (1) would be more likely to apply. Accordingly, the AI would be necessary for the authority to fulfil its legal obligation (c) or to act in line with a legitimate interest (f) that outweighs the interests, fundamental freedoms or fundamental rights of the data subjects. It sounds complicated, and it is – so it’s obviously a case for the data protection officers, and possibly for data trustees. As a neutral intermediary between data providers and data users, a data trustee can pseudonymise or anonymise anything that points towards specific identities. However, if citizens use artificial intelligence offered by the administration that processes personal data, the classic consent rule applies.
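The pseudonymisation step a data trustee might perform can be sketched as follows. This is an illustrative assumption, not a compliance-ready tool: direct identifiers are replaced by keyed hashes so that records stay linkable across datasets without revealing the person behind them, and the secret key stays with the trustee alone.

```python
import hashlib
import hmac

# Sketch of trustee-side pseudonymisation: identifying fields are replaced
# by deterministic keyed hashes (HMAC-SHA256). The same person always maps
# to the same pseudonym, but the mapping cannot be reversed without the key.
SECRET_KEY = b"trustee-only-secret"   # assumption: held by the trustee only

def pseudonymise(record, identifying_fields=("name", "address")):
    safe = dict(record)
    for field in identifying_fields:
        if field in safe:
            digest = hmac.new(SECRET_KEY, str(safe[field]).encode(),
                              hashlib.sha256).hexdigest()
            safe[field] = digest[:16]   # shortened pseudonym
    return safe

record = {"name": "Erika Musterfrau", "address": "Musterweg 1", "year": 1980}
masked = pseudonymise(record)
```

Note the trade-off this design makes: pseudonymised data remains personal data under the GDPR (re-identification is possible with the key), whereas full anonymisation would break the linkability that many analyses need.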
Article 22 GDPR is also relevant with regard to the use of AI. According to this, a natural person has “the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” What’s special in Article 22 is that data protection almost merges into the related field of data ethics.
The EU AI Act – highly relevant for AI in administrative contexts
Setting limits on the use of AI is also an important aim of the European Union’s AI Act. With the most comprehensive law of its kind to date, the EU is attempting to regulate artificial intelligence, steer it in a safe direction and promote innovation. Essentially, the regulation assesses AI applications according to their risk of jeopardising the safety and rights of EU citizens. To this end, it establishes four risk levels. The higher the risk, the stricter the obligations that AI applications must fulfil. Systems with “unacceptable risk” are generally prohibited. This includes social scoring, which ties rights to desired behaviour. Real-time biometric facial recognition in public spaces is also prohibited, with a few exceptions.
AI systems with “high risk” do not pose a threat per se. However, because they are used in sensitive areas such as law enforcement and critical infrastructures, they have to meet strict requirements. AI applications that decide on the approval of state benefits also fall into this category. However, chatbots tend to pose a “limited risk”, while spam filters and AI-supported computer games are considered AI systems with “minimal risk”. In addition, the regulation stipulates that AI-generated or AI-manipulated content such as images, videos, or texts must generally be labelled as such.
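The tiered logic above can be illustrated as a simple lookup (a deliberate simplification for this article, not a legal assessment; the use-case labels are invented and real classification depends on the individual system):

```python
# Toy illustration of the four AI Act risk tiers using the examples
# named in the text. Mapping a use case to a tier determines the
# obligations attached to it.
RISK_TIERS = {
    "social scoring": "unacceptable",
    "real-time biometric identification": "unacceptable",
    "benefit approval system": "high",
    "chatbot": "limited",
    "spam filter": "minimal",
}

def obligations(use_case):
    tier = RISK_TIERS.get(use_case, "unclassified")
    return {
        "unacceptable": "prohibited",
        "high": "strict requirements apply",
        "limited": "transparency obligations apply",
        "minimal": "no additional obligations",
    }.get(tier, "needs individual assessment")

verdict = obligations("social scoring")
```

The structure mirrors the regulation’s core idea: the obligation follows from the risk tier, not from the technology used.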
AI in public administration: also a question of ethical principles
The AI Act attempts to curb the dangers posed by AI while still seeking to promote its use. After all, the added value for citizens can be immense – especially if the AI allows administrations to act more efficiently. However, real social benefits will only arise if AI systems take certain values into account. They must not discriminate against anyone, for example because certain groups are underrepresented in the training data. They must be explainable so that it remains clear how results are achieved. Security against cyber attacks and the ability for people to intervene at any time are equally important.
The Bundesdruckerei Group follows four specific principles in its generative AI projects:
- Factual accuracy: AI systems are not allowed to invent or interpret answers.
- Explainability: There needs to be full transparency around data sources and the legal basis for responses.
- Data sovereignty: Administrative bodies must retain control over all data used throughout the entire data cycle.
- Independence: The AI systems must not lead to dependencies, especially not on non-European companies.
AI in public authorities: Moderation to ensure success
Provided that appropriate values are established, the necessary data basis is accessible, and employees are adequately trained, the deployment of AI applications should result in a genuine win-win scenario within the administration. If artificial intelligence takes over routine tasks, government employees will have more time for what is most important: citizens. And AI-based data analysis and decision support systems benefit society by promoting targeted policies. However, targeted policies are also required when it comes to planning AI projects in public authorities. This calls for coordinated, overarching cooperation between the ministries and the creation of synergies. With BeKI, KI-KC and PLAIN, Germany’s highest administrative level demonstrates that it has understood this requirement.