Ethical guidelines for AI


The German government's Data Ethics Commission is currently drawing up recommendations for action on the ethical handling of artificial intelligence. We interviewed Prof. Dr. Christiane Wendehorst, co-spokesperson of the Commission, about its tasks and challenges, and asked her what national initiatives can achieve if ethical principles play only a subordinate role in other countries.

Interview with Prof. Dr. Christiane Wendehorst about trademarks and competitive advantages

Prof. Dr. Wendehorst, artificial intelligence is an important technology topic of our time and one that is now often discussed in Germany in conjunction with ethics. From your point of view, why do AI and ethics belong together?

Prof. Dr. Christiane Wendehorst: Artificial intelligence – in short: AI – is a technology that will open up completely unimagined possibilities. This will create new opportunities, but also dangers. Ethics can have a formative effect here, so that instead of falling into a 1984-style dystopia, the new possibilities can be put to work for the benefit of the general public and the individual. This will not happen without deeper ethical reflection on what we consider desirable as a society, or without effective implementation.

Can a technology like AI be unethical in itself?

C. W.: No, because AI is a machine, intelligent software. The technology in itself cannot act unethically. But how it is used can be ethical or unethical. This is why those who develop artificial intelligence need to think about the possible application scenarios at the same time. For instance, we have to ask ourselves: Which actions that up to now were carried out by people can be performed by a machine? And we also have to ask ourselves where we want to achieve more objectivity through AI. After all, we do not live in a perfect analog world in which every decision made is flawless; believing that would certainly be a distorted view. AI can, of course, be used to improve processes, for example for pattern recognition of diseases in medicine or quite simply for a more sensible distribution of kindergarten places.

The federal government understands this and has set up a Data Ethics Commission. What are the tasks of this Commission?

C. W.: The German government has provided us with a long list of questions on three major issues: first, algorithmic decision-making; second, artificial intelligence; and third, data handling. For example, one question is how consumer devices must be designed in order to guarantee digital self-determination and comprehensive protection of fundamental rights. Or what a data access regime must look like in modern value creation systems in order to guarantee fair data access for all parties, one that meets the needs of both the individual and the general public. In other words, it is about examining the relationship between data protection and the need to use data to achieve new research results. Or, what happens to the digital data of deceased persons?

Just because we don’t see something doesn't mean we should think it doesn't exist.
Prof. Dr. Christiane Wendehorst

These issues are extremely broad, and we are approaching the list of questions holistically. Discussions are being held in networked form, and use cases are being examined and analyzed so that we can derive recommendations for action. At present we are in a rather horizontal phase, trying to formulate general principles as well as concrete recommendations for action for the German government. In our view, the aim is to set out the concrete measures that need to be taken to ensure that our digital future is ethically sound. We have a very ambitious timetable, and the results of our work are to be presented to the public on 23 October.

What kind of trap would we fall into if we failed to address this issue so comprehensively?

C. W.: Due to the rapid pace at which AI is developing, we as a society cannot yet assess exactly which role it will play and where. The individual is not yet aware of the impact of AI on his or her everyday life. After all, the effects are usually invisible to the individual. The majority of people recognize the benefits of fantastic technological developments. The price that we may have to pay, however, is not always immediately apparent. But just because we don't see something doesn't mean we should think it doesn't exist. That's why we have to deal with it sooner rather than later.

What is your biggest challenge at the moment?

C. W.: The biggest challenge for the entire Data Ethics Commission is to find out which issues need to be regulated – not only today, but also which areas will become a problem tomorrow and the day after. Every piece of legislation goes through a long development process, and at the current pace of technological change we must take care that the guidelines are not already obsolete by the time they are drawn up. We have to find a balance: we not only have to look far enough into the future and word these guidelines in as technology-neutral a way as possible, we also have to be specific enough to meet today's requirements, so that the German government can work sensibly with our recommendations for action.

The Data Ethics Commission is made up of experts from a wide range of disciplines. Is this more of an advantage or a disadvantage in light of the tasks to be mastered?

C. W.: In my view, the interdisciplinary nature of the Data Ethics Commission is something very special and a great strength. The Commission includes ethicists, lawyers and technicians, but also members from other disciplines, such as economics or theology. This is challenging because, of course, everyone looks at the issues through their own disciplinary lens, which brings together very different perspectives and methods. It took us a while to understand each other. But the Commission is filled with fantastic people who are all willing to listen and adapt to each other. We have all come to experience interdisciplinarity as a great added value. And this interdisciplinary discourse is urgently needed, especially when it comes to AI and ethics.

The private sector has also addressed the issue. In the KI Bundesverband e.V. association, for example, 50 companies have agreed on a seal of approval; SAP has set up its own ethics advisory board. Do you think initiatives like these make sense?

C. W.: Yes, they make a lot of sense. I think the only way forward is for tech companies, in particular, to attach greater importance to the subject of ethics. It must be seen as a top priority. Ethics must be considered at all levels and in all phases of product development and distribution. However, these initiatives cannot entirely replace binding and enforceable regulation. We must certainly proceed very cautiously here, because over-regulation would be the last thing our European data economy needs. That being said, it will not work without regulation. The answer, therefore, is that such initiatives are, in my view, essential and a very important driver, but they must not be seen as a substitute for binding requirements.

AI development and use are global issues. What can national initiatives achieve if hardly any or no ethical principles are established in countries like the US or China?

C. W.: I think that ethical guidelines can also serve as a trademark and provide a competitive edge. In the US, people are extremely alarmed, and it is precisely there that a clearer awareness of these issues will develop. The situation in China may be a bit different. With the state-organized scoring system, there is of course a fundamental difference in mentality, but I expect these questions to become more relevant there as well. Experience in other areas shows that formulating high standards does not have to lead to a competitive disadvantage. It can also give an enormous innovation boost to the economy that develops these ethically sound technologies. I am among those who are very optimistic that ethics can also pay off economically.

Thank you very much for the interview, Professor Wendehorst!

Results now published

On 23 October 2019, the Data Ethics Commission (DEK) submitted its report to the German government. This report presents 75 concrete recommendations for dealing with data and algorithmic systems at national, European and international level. The results of the work by DEK are published on the website of the Federal Ministry of the Interior, Building and Community (BMI).

About Prof. Dr. Christiane Wendehorst

Christiane Wendehorst has been a Professor of Civil Law at the University of Vienna since 2008. Prior to that, she held chairs in Göttingen and Greifswald and was managing director of the German-Chinese Institute of Law.


Prof. Dr. Christiane Wendehorst is co-spokesperson for the German government’s Data Ethics Commission.

She is a founding member and President of the European Law Institute (ELI), chair of the Academic Council of the Austrian Academy of Sciences (ÖAW), chair of the Civil Law section of the Austrian Jurists' Forum (ÖJT), co-chair of the German Data Ethics Commission and member of the Academia Europaea (AE), the International Academy of Comparative Law (IACL), the American Law Institute (ALI) and the Bioethics Commission at the Austrian Federal Chancellery. Her current work focuses on the legal challenges arising from digitalization. She has worked as an expert on subjects such as digital content, the Internet of Things, artificial intelligence and the data economy for the European Commission, the European Parliament, the German Federal Government, the ELI and the ALI, among others. Prof. Dr. Wendehorst is married and has four children.
