Talk

Privacy and Ethics in Highly Automated Driving

As a programmer, what do I do if my work is about to make the jobs of 500 people redundant? We were asked questions such as this one when we presented our work from the project KoFFI at BSides, Stuttgart's first security conference. As a follow-up to our talk at ASRG-S, Susanne Kuhnert and I were invited to discuss ethical aspects of highly automated driving with practitioners. Another talk, on "Autonomous Security", already touched on one of the ethical questions — if not the ethical question — that also receives much attention in the media: "who should die in a car crash?" (Awad et al.). It is being discussed whether autonomous cars should be programmed to solve this problem before they drive on our streets.

Autonomous driving is one area in which the question of "moral machines" arises: is it possible to implement ethics into machines and robots so that they behave ethically by default and can calculate on their own what is ethically right? A more general question behind this is whether it is "rather ethical to not design an ethical machine since it could be easily turned into an unethical one at any time" (Vanderelst/Winfield 2016).

MIT has conducted a study in which users could solve moral dilemmas. On its website, users click through images and choose the direction in which the autonomous vehicle should swerve the moment a collision becomes unavoidable. In 2018 the results of the experiment were published; they showed cultural differences between people's preferences, but also that there are universal ethical values common to all human beings across the world.

This result underpins an assumption that we are making in KoFFI, too: even though different cultures set different ethical rules, there are universal ethical standards. To grasp them, one can consider the various values that need to be taken into account while developing technology. Values can guide our actions, construct reality, and can be reasons to act or not to act in a certain way (Grimm 2013). Friedman, Kahn and Borning state: "A value refers to what a person or group of people consider important in life" (Friedman/Kahn/Borning 2006). To identify important values in KoFFI, we asked our research partners which values they consider central to highly automated driving. The answers were: autonomy, privacy, security, trust, integrity, (system) transparency. Privacy in particular is a value that comes to mind when thinking about connected cars that collect, process and store data about their users.

The project KoFFI is concerned with automation levels 3 and 4, not yet with fully autonomous cars. However, we do consider the trolley problem an important thought experiment. It has to be examined precisely because it is an "edge case" (Lin 2017). The work package for HdM comprises the "Ethical, legal and social questions" of highly automated driving. Since the beginning of the project in 2016 we have been working closely with our project partners. One of our aims in KoFFI is ethical awareness-raising within our team, and it has been very interesting to discuss specific ethical questions with our research partners. To frame our research we have also conducted narrative interviews in which the interview partners told us about their personal experiences with cars, driving and mobility. Narrative interviews are a method to avoid "socially desired responses" (Müller/Grimm 2016). Furthermore, we have interviewed the persons who tested the KoFFI software in the driving simulators and found their views on "trusting" machines and autonomous cars very insightful while drafting the KoFFI-Code: Ethical guidelines for highly automated driving.

In the past few years, many ethics guidelines have been formulated. The guidelines (Ethik-Regeln) of the "Ethik-Kommission automatisiertes und vernetztes Fahren" of the Bundesministerium für Verkehr und digitale Infrastruktur (BMVI) were published in 2017. The High-Level Expert Group on AI published its "Ethics Guidelines for Trustworthy AI" in April 2019. As a practical tool for ethical decisions concerning data, the Utrecht Data School has formulated a Data Ethics Decision Aid.

We at KoFFI think that guidelines should and can meet the following requirements:

  1. Guidelines should be developed together with all stakeholders, while taking care that certain stakeholders, namely enterprises and companies, do not use this process to implement only what is best for them.

  2. Ethics should be considered from the beginning, throughout the process, and even after the product is ready or the process has finished. If necessary, the product needs to be changed even if it is already on the market. Due to societal change, it might even become necessary to change the guidelines themselves.

  3. The ethical evaluation and monitoring of a research and development process should avoid paternalism. It should aim at convincing the responsible persons to behave ethically, to consider ethics and possible ethical issues and, where possible, to apply ethics.

  4. In the so-called AI race it has been discussed that ethics might represent a "selling point" for products made and software programmed in the EU. In this light, ethics need not and should not be conceived of as an "obstacle" to successful business, but rather as an advantage for all humankind.

 

In KoFFI, a first set of rules has been formulated by Susanne Kuhnert and comprises the following principles:

1. Technology is never neutral

(Designing) technology is never neutral. Designers, engineers, software developers, coders, etc. should become aware of the values they consciously and unconsciously, implicitly and explicitly express with and through their designs, and of the values they actually want to promote. Values should be reflected upon especially in an intercultural context, since technology and design are of socio-political relevance.

2. "Justice as fairness"

Automated technologies should respect justice and be careful not to promote social discrimination. John Rawls's principle of "justice as fairness" offers a perspective for judging just distribution by regarding things through the so-called "veil of ignorance": behind it, neither one's social status nor one's capabilities may influence a just order.

3. (Inter-)national laws and human rights

All developments must comply with the applicable national laws as well as international human rights. While a law describes direct prohibitions, a right is something that can be claimed by every individual but does not have to be claimed. Mobility is an individual right, and it is related to other individual rights such as liberty rights and the right to private property. However, a right should not need to be claimed; it should be respected by design from the very beginning of the design process. At the latest from the moment a design or a product concerns or changes laws and rights, it acquires a political dimension.

4. Human autonomy / moral autonomy

The autonomy of a human being has to be respected, especially with regard to their moral autonomy. Moral decisions should never be forced upon a person through technology, and there must always be enough freedom for the individual to make their personal decisions. A manipulative technology is not desirable from an ethical point of view. Technology should not pass judgment on a human being and should never force a person to judge in a certain way. Moral autonomy is a precondition for the right to a life in freedom.

5. Sincerity and transparency

All processes should be accompanied by the value of sincerity. Without sincerity, there is no real point in claiming transparency.

6. Protecting the future, nature and human life

Innovations aim to shape the future, which is why the future also needs to be protected by innovations. Nature and human life in particular should be thought of as vulnerable entities that must be respected and rightfully protected in all design and development processes.

7. Human-technology cooperation

Cooperation between human beings and technology shall respect the value of solidarity, especially with regard to the skills humans will need in order to handle machines in the future. All human beings, in all social positions, must be able to learn these skills to a certain degree. Technology should not have a negative impact on solidarity among humans.

As a next step, we are thinking about how to apply these general principles concretely to highly automated driving, and we will draft a checklist, to be discussed with the other partners in KoFFI, for implementing ethics and privacy "by design" in the KoFFI prototype.

Co-presenter: Susanne Kuhnert
Talk at event: BSides
Venue: Stuttgart
Date: 25.05.2019 to 26.05.2019

Further links:
Institut für Digitale Ethik

