What types of artificial intelligence are there, and what can they do in practice?

Explained briefly: 

  • Not all artificial intelligence is the same. There are different types of AI with clearly defined capabilities.
  • Most AI systems in use today, including chatbots and LLMs, fall into the “limited memory” category.
  • Classification helps: Anyone wanting to use AI strategically, lawfully, and effectively must know which type they are dealing with.

Why not all AI is the same

The term “artificial intelligence” (AI) is ubiquitous, yet often misunderstood. AI is not a single uniform system, but rather an umbrella term for a wide range of technologies, methods, and applications. While some AI models can independently generate texts, others can only produce predefined responses. In order to sensibly assess opportunities and risks, it is important to know the different types of AI. 

This distinction is particularly crucial in public administration: Not every AI is suitable for every task. Not every AI may legally be used. Being able to classify AI systems lays the foundation for informed decisions, regulatory compliance, and fit-for-purpose applications.

Systematic classification: the four types of AI

A common classification distinguishes four types of artificial intelligence based on their level of cognitive capability – i.e. how they learn, reason, and understand.

Reactive AI (Type 1)

  • Responds exclusively to current inputs, with no memory or learning abilities
  • Works in a rule-based way and cannot store experiences
  • Examples: Chess computers, simple automation systems

Relevance for public administration: relatively low. Reactive systems are rarely suitable for complex tasks requiring context or variability.
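
To make the idea concrete, here is a minimal, purely illustrative sketch in Python (rules and wording are invented) of a reactive system: it maps the current input to a fixed response and stores nothing between calls.

# Minimal sketch of a reactive (Type 1) system: fixed rules, no memory,
# no learning. Rules and texts are invented for illustration.
RULES = {
    "opening hours": "The office is open Monday to Friday, 8:00 to 16:00.",
    "passport": "Passports are issued at counter 3; please bring photo ID.",
}

def reactive_answer(query: str) -> str:
    """Respond only to the current input; nothing is remembered between calls."""
    for keyword, answer in RULES.items():
        if keyword in query.lower():
            return answer
    return "Sorry, I cannot answer that."

print(reactive_answer("What are your opening hours?"))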

Limited memory (Type 2) 

  • Uses past data to make decisions or generate predictions
  • Typically associated with machine learning (ML) or deep learning
  • Examples: Language models such as ChatGPT, image recognition, recommendation systems 

This type of AI is by far the most widespread today and is also increasingly being used in public administration: 

  • Automated text capture and classification
  • Chatbots for communicating with citizens
  • Forecasting models for staffing or planning issues 

Note: Even if these systems seem impressive, they have neither “understanding” nor genuine consciousness. They recognise patterns, not meaning.
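
To illustrate how a “limited memory” system draws on past data, the following sketch trains a very small text classifier on invented example enquiries (assuming scikit-learn is available). It recognises statistical patterns, not meaning.

# Sketch of a Type 2 ("limited memory") system: a classifier fitted on past
# examples that predicts a category for new input. All data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_enquiries = [
    "I need a new passport",
    "How do I renew my passport?",
    "Where can I register my new address?",
    "I moved and must update my registration",
]
labels = ["passport", "passport", "registration", "registration"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_enquiries, labels)  # learns patterns from past data

print(model.predict(["My passport expired last month"]))  # expected: ['passport']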

Theory of Mind (Type 3)

  • Hypothetical AI that can comprehend human emotions, intentions, or thoughts
  • Recognises not only data but also the “intentions” behind actions
  • Currently purely theoretical and subject to extensive ethical and regulatory debate 

Practical question: Would such an AI even be useful in public administration? And how could control, accountability, and transparency be ensured?

Self-aware AI / superintelligence (Type 4) 

  • Even more hypothetical than type 3: an AI that is self-aware and pursues its own goals
  • Often the subject of science fiction films and of philosophical debates
  • No research has reached this point; it is a vision of the future 

Nevertheless relevant: This type of AI is at the heart of long-term governance debates (e.g. in connection with the AI Act) and broader societal questions about technological development.

How does AI work?

In addition to the cognitive categorisation, AI can also be distinguished by its mode of operation or complexity.

Symbolic AI vs neural networks

  • Symbolic AI: based on fixed rules, logical inference, and knowledge bases (the classic approach)
  • Neural networks: learn from large volumes of data, in an abstracted way similar to the human brain (the modern approach)

Relevance in practice: Symbolic systems are suited to rule-bound processes (such as control logic, where fixed rules and decisions are predefined), while neural networks are suited to unstructured data (such as images and language).
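
The difference can be illustrated with a deliberately tiny example (Python, scikit-learn assumed): the same XOR decision written down as an explicit rule versus learned from example data by a small neural network.

# Symbolic AI: the rule is stated explicitly by a human.
def xor_symbolic(a: int, b: int) -> int:
    return int(a != b)

# Neural network: the same mapping is learned from example data.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]
net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=0)
net.fit(X, y)

print([xor_symbolic(a, b) for a, b in X])  # [0, 1, 1, 0]
print(list(net.predict(X)))                # typically also [0, 1, 1, 0]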

Narrow AI vs General AI

  • Narrow AI (weak AI): is specialised in a clearly defined task (e.g. text recognition)
  • General AI (strong AI): can solve a range of tasks flexibly, comparable to human intelligence 

Note: All AI applications currently available, including powerful LLMs, are considered Narrow AI. General AI remains theoretical to date.

Good to know:

What is an LLM (Large Language Model)? 

An LLM is an AI system trained on vast quantities of text that can generate, analyse and, at least in statistical terms, “understand” language. Examples include GPT-4 and BERT (an AI model by Google). LLMs belong to Type 2 AI (“limited memory”) and are powerful tools, but not “conscious” ones. In public administration, LLMs can, for example, help analyse documents, classify inputs, or automatically respond to citizen enquiries.
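
As one hedged illustration of such use, the sketch below classifies a citizen enquiry with a publicly available pre-trained language model via the Hugging Face transformers library. The model name is only an example and is downloaded on first use; the enquiry and categories are invented.

# Illustrative sketch: classifying an enquiry with a pre-trained language
# model. Requires the "transformers" library; categories and text are invented.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

enquiry = "I lost my passport while travelling. What do I have to do now?"
categories = ["passport", "registration", "opening hours", "other"]

result = classifier(enquiry, candidate_labels=categories)
print(result["labels"][0])  # most probable category, e.g. "passport"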

AI and its significance for public administration

For public authorities, it is crucial to understand what type of artificial intelligence they are dealing with. This is because not every AI technology is suitable for use in sensitive administrative processes, and not every one is legally permissible.

  • At present, public administration almost exclusively uses AI systems from the “limited memory” (Type 2) category. This includes, for example, large language models (LLMs) such as ChatGPT. These systems are based on past training data and can use it to recognise patterns, make predictions, or generate text. They are powerful, but not conscious, not autonomous, and not “understanding”.
  • The more advanced Types 3 (“Theory of Mind”) and 4 (“self-aware AI”) currently exist only as theoretical concepts or long-term research ideas. They are not market-ready, not regulated, and therefore neither relevant nor usable for day-to-day administrative practice.

Assigning a system to a specific AI category is particularly important in the context of regulatory issues such as the EU AI Act, which classifies AI systems into different risk categories. It helps to

  • assess risks realistically,
  • define requirements for transparency and control, and
  • establish appropriate testing and governance mechanisms. 

Anyone making decisions about AI applications in public administration should understand the functional principle on which the system is based. Only then can the limits of use, risks and potential be assessed soundly and decisions made with confidence.

Practical example: Chatbot for citizen enquiries at the local citizen service centre

Initial situation: A local authority wants to introduce a digital assistant that automatically answers frequently asked questions from citizens about registration certificates, passports, or opening hours.

Type of AI used: Type 2 – limited memory 

The chatbot is based on a trained language model that has analysed previous user enquiries. It recognises patterns, can provide suitable answers, and learns with each interaction, but without its own understanding or contextual awareness.
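
A heavily simplified sketch of the matching step such a chatbot might perform is shown below (Python, scikit-learn assumed; all questions and answers are invented): the incoming enquiry is compared with previously seen questions purely on textual similarity.

# Simplified FAQ-matching sketch: find the most similar known question and
# return its stored answer. No understanding, only pattern similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "How do I apply for a passport?": "Book an appointment and bring a photo and ID.",
    "What are the opening hours?": "Monday to Friday, 8:00 to 16:00.",
    "How do I register a new address?": "Visit the citizen service centre within 14 days of moving.",
}

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(list(faq))

def answer(enquiry: str) -> str:
    """Return the stored answer of the most similar known question."""
    scores = cosine_similarity(vectorizer.transform([enquiry]), question_vectors)
    return list(faq.values())[scores.argmax()]

print(answer("What are your opening hours today?"))  # -> opening-hours answer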

Opportunities: 

  • Reducing the workload for caseworkers when dealing with standard enquiries
  • 24/7 availability for citizens
  • Improved service quality through rapid responses

Risks and limitations: 

  • No deeper understanding of complex issues
  • Incorrect or misleading answers where wording is unclear
  • Data protection requirements when processing personal data

Conclusion: 

Because it is clear that this is a Type 2 AI system, the authority can take targeted measures for quality assurance, data protection checks, and user guidance, and integrate the chatbot effectively into day-to-day work.

Conclusion: Clarity provides direction

Categorising AI by types and modes of operation may seem theoretical at first glance, but it is extremely helpful in practice. Because: 

  • Those who know the type of AI being used can set realistic expectations.
  • It is a prerequisite for legally compliant use, for example under the EU AI Act.
  • It helps to build governance structures, for example through internal review processes or centres of expertise. 

The type of AI determines what it can do and what it may do. Those who understand this make better decisions.
