Risks of artificial intelligence

How public authorities can use AI responsibly: identify risks, protect data, involve people, ensure transparency.

Explained in brief 

  • The biggest risk is bias: Faulty data or blind trust in models can lead to wrong decisions. Human oversight, documentation, and monitoring are therefore mandatory.
  • Law and governance set boundaries: Artificial intelligence must not fully replace administrative decisions. The GDPR, NIS2, and the EU AI Act regulate data protection, security, and obligations for high-risk AI.
  • Transparency builds trust: Citizens must be able to understand how decisions are made. The EU AI Act requires public authorities to disclose and document AI usage.

Where are the greatest error risks in AI systems?

AI-supported systems are increasingly used in administrative decision-making, for example in grant applications, risk analyses, or case processing. Errors arise particularly where data is incomplete, out of date, or unbalanced, and where staff follow automated recommendations without critical review. 

Two main risks:

Bias in data and models 

Biased training data perpetuates disadvantages. Mitigation includes careful data maintenance, fairness testing, and documented impact assessments.

Automation bias in teams 

When scores are read as the truth, systematic misjudgements can creep in. Human-in-the-loop checks, the four-eyes principle, and the obligation to justify deviations ensure human control.
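
As an illustration of the fairness testing mentioned under the first risk, the sketch below compares approval rates across groups and flags large gaps for review. The field names and the 0.8 ratio threshold (the informal four-fifths rule) are assumptions, not regulatory requirements.

```python
# Minimal sketch of a group fairness check: compares the rate of positive
# decisions across groups and flags any group whose rate falls well below
# the best-served group. Thresholds and field names are illustrative.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparity_check(decisions, min_ratio=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    # Return groups whose approval rate is far below the best-served group.
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < min_ratio}

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False), ("B", False)]
    print(disparity_check(sample))  # e.g. {'B': 0.38} -> needs investigation
```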

Real-life example: Mismanagement in the UK social welfare administration

In 2024, an algorithm at the UK Department for Work and Pensions falsely identified many ordinary citizens as suspicious. This led to thousands of unnecessary checks and a significant loss of trust. The example shows the importance of continuous monitoring, quality control, and public accountability when using AI systems.

How do the GDPR, NIS2, and the EU AI Act protect against the risks of AI?

AI-supported systems create risks regarding data protection, data quality, and IT security. Insecure interfaces, inadequately protected cloud services, or poorly documented models can enable manipulation, data loss, or unauthorised linkages, resulting in serious consequences for the state and for citizens. 

Binding regulatory and governance frameworks are already in place.

  • GDPR: Decisions with a significant impact must not be fully automated. Public authorities must allow human evaluation and grant individuals the right to access information, submit a formal response, and appeal decisions.
  • NIS2: Security and reporting requirements are being expanded. Relevant measures include hardened interfaces, access controls, logging, and supply-chain checks, especially for cloud and platform services.
  • EU AI Act: For high-risk applications in public administration, tiered requirements regarding data quality, documentation, transparency, governance, and oversight are being introduced (not yet fully in force).

In practice, this means: Processes for data and model governance must be established and responsibilities clearly assigned, from collection and processing through to storage and deletion.

How can public administrations remain transparent when using AI?

Many AI models, especially in machine learning, effectively function as a black box. In many cases, decisions cannot be fully explained or understood. For public administration, this is a problem because citizens are entitled to understandable, verifiable, and contestable decisions. A lack of explainability and auditability undermines trust in public institutions. 

What public authorities should consider: 

  • Documenting and regularly reviewing decision-making processes, including data and role logs
  • Creating model cards that transparently outline the purpose, training data, limitations, and versions (see the sketch below)
  • Implementing technical audit mechanisms, reviewing routines for models, and clear version control
  • Establishing error and complaints procedures with low-threshold access for those affected 

Transparent AI systems are essential for maintaining control and safeguarding citizens’ rights.
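
A minimal sketch of how such a model card could be kept in machine-readable form follows below, assuming a simple Python dataclass. The field names and example values are illustrative, not a mandated schema.

```python
# Minimal sketch of a machine-readable model card covering purpose,
# training data, limitations, and version. Names and values are assumptions.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    version: str
    purpose: str
    training_data: str
    limitations: list = field(default_factory=list)
    responsible_unit: str = ""

card = ModelCard(
    name="grant-triage",
    version="1.3.0",
    purpose="Prioritise grant applications for manual review",
    training_data="Anonymised applications 2019-2023, listed in the dataset register",
    limitations=["Not validated for applications from newly founded organisations"],
    responsible_unit="Unit IT-4 / data governance board",
)

print(json.dumps(asdict(card), indent=2, ensure_ascii=False))
```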

How to ensure safe and fair AI operation

Safe and fair AI combines clear responsibilities, technical robustness, quality control, and well-trained staff.

Create clear governance structures

Roles, responsibilities, and approval processes should be defined so that accountability remains clear and does not disappear “into the system”.

Strengthen security and resilience

Public authorities should secure their AI systems technically, including through hardened interfaces (APIs), detailed access controls, comprehensive logging, and regular supply chain checks.
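
As an illustration only, the following sketch wraps access control and audit logging around an AI scoring call, using nothing beyond the standard library. The role names, the score_case stub, and the log format are hypothetical.

```python
# Minimal sketch: role-based access control plus audit logging around a
# (placeholder) AI scoring call. Roles, case IDs, and the log format are
# illustrative assumptions.
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("ai-audit")

ALLOWED_ROLES = {"caseworker", "supervisor"}

def audited(func):
    @wraps(func)
    def wrapper(user, role, case_id, *args, **kwargs):
        if role not in ALLOWED_ROLES:
            audit_log.warning("DENIED user=%s role=%s case=%s", user, role, case_id)
            raise PermissionError(f"role {role!r} may not request AI scores")
        result = func(user, role, case_id, *args, **kwargs)
        audit_log.info("SCORED user=%s role=%s case=%s result=%s", user, role, case_id, result)
        return result
    return wrapper

@audited
def score_case(user, role, case_id):
    return {"case": case_id, "risk_score": 0.42}  # placeholder for the real model call

if __name__ == "__main__":
    score_case("m.mueller", "caseworker", "2024-0815")
```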

Ensure quality and fairness

Datasets should be maintained continuously, models reviewed regularly, and drift effects identified. Before going live, a trial run in what’s known as shadow mode is recommended to identify risks early on.
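
A minimal sketch of what such a shadow-mode comparison could look like follows below: the model scores cases in the background while humans continue to decide, and the two outcomes are compared before go-live. The field names and the agreement threshold are assumptions.

```python
# Minimal sketch of a shadow-mode trial: model outputs are recorded but
# never acted on, and are compared with the human decisions afterwards.
def shadow_mode_report(cases, model, agreement_target=0.9):
    agree = 0
    disagreements = []
    for case in cases:
        model_decision = model(case)            # recorded, never acted on
        human_decision = case["human_decision"]
        if model_decision == human_decision:
            agree += 1
        else:
            disagreements.append(case["id"])
    rate = agree / len(cases)
    return {"agreement_rate": round(rate, 2),
            "ready_for_go_live": rate >= agreement_target,
            "cases_to_review": disagreements}

if __name__ == "__main__":
    toy_model = lambda c: c["amount"] <= 5000   # stand-in for the real model
    cases = [{"id": 1, "amount": 3000, "human_decision": True},
             {"id": 2, "amount": 9000, "human_decision": False},
             {"id": 3, "amount": 4000, "human_decision": False}]
    print(shadow_mode_report(cases, toy_model))
```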

Enable and involve staff

Humans remain an indispensable control element. Training on typical errors (e.g. automation bias), mandatory guidelines for overriding AI recommendations, and a documented four-eyes principle strengthen accountability and trust.

The core principles of this governance are also reflected in the article on AI frameworks for public administration.

How can the life cycle risks of AI models be managed?

AI systems are not one-off projects. They go through a full lifecycle, from development and deployment through to adaptation or decommissioning. New risks can arise at every stage: faulty training data, unclear responsibilities, data drift, or a lack of monitoring. Managing these risks systematically preserves quality, fairness, and security in operation.

From the idea to recertification

  1. Problem definition: The purpose, those affected, and the legal basis should be clear from the outset.
  2. Shadow mode (test phase): Before real-world use, the model should run in the background. This makes it possible to check whether unintentional biases occur.
  3. Go-live with KPIs: At launch, clear indicators must be defined: What error rate is acceptable? How is fairness measured? Who is responsible for monitoring?
  4. Drift monitoring and alerts: A good system automatically detects when data structures or results shift, and triggers appropriate alerts (a sketch of such a check appears below).
  5. Retraining and recertification: If conditions or datasets change, the model must be reassessed and approved again.

Re-review

Changes to data, features, or code generally trigger a re-review, even if they come from the supplier or an external provider.
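
Step 4 mentions drift monitoring. The sketch below uses the population stability index (PSI) on a single input feature; the bin count, the 0.2 alert threshold, and the sample data are common rules of thumb and illustrative assumptions, not prescribed values.

```python
# Minimal sketch of drift monitoring: population stability index (PSI)
# between the input distribution at go-live and the current inputs.
import math

def psi(expected, actual, bins=10):
    """Population stability index between a baseline sample and current inputs."""
    lo, hi = min(expected), max(expected)

    def shares(values):
        counts = [0] * bins
        for v in values:
            # clamp into the baseline range, then assign to a bin
            idx = min(bins - 1, max(0, int((v - lo) / (hi - lo) * bins)))
            counts[idx] += 1
        return [(c or 1) / len(values) for c in counts]  # avoid log(0)

    return sum((a - e) * math.log(a / e)
               for e, a in zip(shares(expected), shares(actual)))

if __name__ == "__main__":
    baseline = [i / 100 for i in range(100)]                   # feature at go-live
    current = [min(1.0, i / 100 + 0.3) for i in range(100)]    # same feature, months later
    value = psi(baseline, current)
    print(f"PSI = {value:.2f}",
          "-> ALERT: review, retrain, recertify" if value > 0.2 else "-> stable")
```

PSI is only one possible drift signal; output distributions and error rates should be watched in the same way.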

How does human-in-the-loop methodology reduce operational errors?

Even with carefully trained models, human judgement remains indispensable. The human-in-the-loop approach keeps people actively involved in AI-supported processes: not by checking every calculation manually, but through defined intervention and review points. This ensures that accountability, traceability, and legality are maintained.

Three key rules for practice

Check implausible reasoning 

Staff must not follow recommendations without clear and understandable reasoning.

Identify out-of-scope cases 

In exceptional cases, under new legal frameworks, or in unfamiliar contexts, the responsible person must make the final decision.

Consider high impact 

In cases affecting a person’s livelihood or involving fundamental rights, a human should always make the final decision.

Process recommendation 

A functioning human-in-the-loop process should be documented and embedded in the specialist system. Staff review recommendations, evaluate them, document their decisions, and apply the four-eyes principle for high-impact cases. In addition, a reasoning field in digital forms helps to record deviations transparently.
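
As a sketch of what such a documented record could look like in the specialist system, the hypothetical decision record below enforces a reasoning field whenever the AI recommendation is overridden and a second approver for high-impact cases. All field names and the "high impact" rule are assumptions.

```python
# Minimal sketch of a human-in-the-loop decision record with a mandatory
# deviation reason and a four-eyes check for high-impact cases.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionRecord:
    case_id: str
    ai_recommendation: str
    human_decision: str
    reviewer: str
    deviation_reason: Optional[str] = None   # required when overriding the AI
    second_approver: Optional[str] = None    # four-eyes principle for high impact
    high_impact: bool = False

    def validate(self):
        if self.human_decision != self.ai_recommendation and not self.deviation_reason:
            raise ValueError(f"{self.case_id}: overriding the AI requires a documented reason")
        if self.high_impact and not self.second_approver:
            raise ValueError(f"{self.case_id}: high-impact cases need a second approver")
        return True

record = DecisionRecord(
    case_id="2025-0042",
    ai_recommendation="reject",
    human_decision="approve",
    reviewer="j.schmidt",
    deviation_reason="Missing documents were submitted after the scoring run",
    second_approver="a.weber",
    high_impact=True,
)
print(record.validate())  # True -> record may be stored in the specialist system
```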

Important!

Regular training and feedback loops are essential. Only those who understand the limits of AI systems can classify their outputs correctly and take responsibility.

How can risks such as black box and vendor lock-in be minimised?

In many public authorities, AI systems are procured as external solutions or via cloud services. This brings efficiency gains, but also opens up new risks: black-box systems (whose internal workings cannot be inspected) and vendor lock-in (dependency on a single provider). Both risks can significantly limit control over data, models, and further development.

A responsible procurement process ensures that, even after implementation, public authorities know how an AI system works, who influences it, and how it can be replaced if needed.

Five points that should be included in every tender

1. Agreement on audit clauses: 

Contracts must allow internal and external audits, especially for high-risk applications.

2. Disclosure of model and dataset metadata:

Providers must document which data was used for development, how the model was trained, and which version is currently in use.

3. Secure export pathways: 

Open formats, defined interfaces, and handover rules protect against dependency.

4. Defined service-level agreements with quality indicators: 

Contractual metrics for error rates, availability, response times, and fairness should be binding.

5. Exit strategy provided: 

Every procurement must regulate how systems can be replaced or shut down when a contract ends or in case of non-compliance.

Important!

Procurement is part of governance, not just a purchasing process. Every AI procurement should be reviewed jointly by IT, data protection, legal, and specialist departments. This protects data sovereignty, the ability to switch systems, and the administration’s ability to maintain control.

How can communication prevent acceptance risks?

Even the most technically advanced AI is of little use if people do not trust it. In public administration, missing or poorly prepared information can quickly lead to scepticism or rejection. Trust only develops when processes are explained transparently, participation is enabled, and raising objections is straightforward.

Building blocks for trust-building communication 

  • Disclose where AI is being used
  • Name responsible parties
  • Enable objections and questions
  • Address errors openly 

Communication is not a one-off act, but an ongoing process. Acceptance is only created and legitimacy secured when public authorities, technology teams, and citizens remain in communication.

Which KPIs truly measure risk and quality?

AI systems only deliver value if their performance and fairness remain measurable. In public administration, this is crucial to retain trust, efficiency, and legal compliance in the long term. 

Meaningful KPIs go beyond classic model metrics. They show whether an AI system operates fairly, reliably, and transparently, or whether adjustments are needed.

Key metrics in practice: 

  • Error rate: shows how reliably the model performs and whether it has systematic weaknesses. A rate below 2% is often considered a target value.
  • Appeal rate: shows how often decisions are challenged. A low rate indicates fairness and acceptance.
  • Processing time: measures whether AI actually contributes to more efficient workflows or slows processes down.
  • Distribution of impact: checks whether certain groups are disproportionately affected negatively. Outliers indicate potential discrimination.
  • Fairness metric: compares equal treatment across different groups, for example using measures such as “equal opportunity”. Small deviations are normal; larger ones require analysis.
  • Security incidents: capture operational stability. Critical incidents should occur rarely, or not at all. 

These metrics should be collected, documented, and evaluated regularly. It is important that they remain understandable, verifiable, and consistent.
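
As an illustration, the sketch below computes some of the listed metrics from a hypothetical decision log. The record layout and group labels are assumptions; the 2% error-rate target is taken from the list above.

```python
# Minimal sketch of KPI computation from a decision log: error rate,
# appeal rate, and an equal-opportunity comparison across groups.
def kpis(records, error_target=0.02):
    n = len(records)
    errors = sum(r["overturned_on_review"] for r in records)
    appeals = sum(r["appealed"] for r in records)
    # Equal opportunity: approval rate among eligible applicants, per group.
    rates = {}
    for r in records:
        if r["eligible"]:
            g = rates.setdefault(r["group"], [0, 0])
            g[0] += r["approved"]
            g[1] += 1
    equal_opportunity = {grp: a / t for grp, (a, t) in rates.items()}
    gap = max(equal_opportunity.values()) - min(equal_opportunity.values())
    return {
        "error_rate": errors / n,
        "error_target_met": errors / n <= error_target,
        "appeal_rate": appeals / n,
        "equal_opportunity_by_group": equal_opportunity,
        "equal_opportunity_gap": round(gap, 2),
    }

if __name__ == "__main__":
    log = [
        {"group": "A", "eligible": True, "approved": True,  "appealed": False, "overturned_on_review": False},
        {"group": "A", "eligible": True, "approved": False, "appealed": True,  "overturned_on_review": True},
        {"group": "B", "eligible": True, "approved": True,  "appealed": False, "overturned_on_review": False},
        {"group": "B", "eligible": True, "approved": True,  "appealed": False, "overturned_on_review": False},
    ]
    print(kpis(log))
```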

Go-live quick check for AI systems

Before an AI system enters live operation in public administration, it should be checked whether all legal, technical, and organisational requirements have been met and key risks have been addressed. 

Any item answered with “No” indicates that action is required: 

  • Legal basis and purpose are documented.
  • DPIA and bias check are completed and are understandable.
  • A security and governance concept with roles and reporting routes is in place.
  • Human-in-the-loop is bindingly defined, including the four-eyes principle.
  • Monitoring, KPIs, and a fallback option are set up and tested. 

If all items are met, an AI system can be responsibly transitioned into operation.
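
Purely as an illustration, the quick check can also be captured as a small script that lists every item still answered with "No"; the data structure is an assumption.

```python
# Minimal sketch: the go-live quick check as a script that prints open
# action items. Item texts mirror the checklist above.
CHECKLIST = {
    "Legal basis and purpose documented": True,
    "DPIA and bias check completed and understandable": True,
    "Security and governance concept with roles and reporting routes": False,
    "Human-in-the-loop and four-eyes principle bindingly defined": True,
    "Monitoring, KPIs and fallback option set up and tested": False,
}

open_items = [item for item, done in CHECKLIST.items() if not done]
if open_items:
    print("Action required before go-live:")
    for item in open_items:
        print(" -", item)
else:
    print("All checks passed: the system can be transitioned into operation.")
```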

Frequently asked questions about risks and obligations when using artificial intelligence in public administration

When is an AI system considered high-risk?

An AI system is considered high-risk if it decides on access to public services, benefits, or rights, or significantly influences the behaviour of citizens. Such systems are subject to strict requirements for data quality, documentation, transparency, and human oversight.

What does NIS2 require of public institutions?

NIS2 requires public institutions to implement comprehensive cyber security measures. These include risk and incident management, supply-chain controls, and mandatory reporting of security incidents. AI platforms, cloud services, and interfaces in particular fall under this regulation.

When does an AI system need to be reviewed again?

As soon as training data, code, or operating conditions change significantly, a new review is required. Updates from external suppliers can also affect how the system works. A defined recertification process ensures that the system continues to operate correctly, securely, and fairly.

How can public authorities avoid dependency on providers?

Audit clauses, data and model transparency, and export options should be agreed during procurement. Contracts must ensure that public authorities retain access to relevant information and can replace or shut down the system if necessary.

Why is transparent communication so important?

Transparency builds trust. If citizens understand where AI is used, who is responsible for decisions, and how they can lodge objections, acceptance increases significantly. Open communication is therefore a key factor for the success of AI projects in the public sector.
