2022. 01. 19.

A Brief Talk with Peter Biegelbauer on the AIT AI Ethics Lab

Peter Biegelbauer

  • Senior Scientist at the Center for Innovation Systems and Policy, AIT Austrian Institute of Technology
  • Coordinator of Co-Change Project
  • Leader of the AIT AI Ethics Lab
  • Research interests: ethical aspects of artificial intelligence, responsible research and innovation, knowledge and expertise in decision-making, policy analysis
  • Teaches at the University of Vienna, University of Applied Sciences FH Campus Vienna, Austrian Federal Academy for Public Administration
  • Visit him on LinkedIn, Twitter, ResearchGate, Academia

The Austrian Institute of Technology (AIT) is the largest Austrian research and technology organisation. The majority of AIT’s 1600 scientists work in technical fields, but AIT also employs social scientists, including the consortium leader of the Co-Change Project, Dr. Peter Biegelbauer. Together with his colleagues, Peter launched the AIT AI Ethics Lab, a Co-Change Lab, which provides an institutionalised space to reflect on the ethics of artificial intelligence and machine learning. In this way, an interdisciplinary channel of communication and exchange has been established between technological experts and social scientists.

The AI Ethics Lab focuses on the ethical challenges of innovations that build on machine learning and artificial intelligence. What are these challenges or dangers that we need to take into consideration?

At the moment, machine learning is the most advanced branch of artificial intelligence, as it mimics human understanding and learning through algorithms. The discipline is growing and improving very rapidly and requires huge amounts of data. For example, Google captures location data from people’s switched-on mobile phones to suggest an efficient route through a city. If you and others drive and stop somewhere for a bit, it is recognised as a pattern: stop-and-go traffic. So, Google Maps marks the road in orange, and if you stand still for longer, it turns red: a full-fledged traffic jam.
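To make the pattern concrete, here is a minimal sketch in Python of how aggregated speed samples from phones on one road segment might be mapped to traffic colours. The thresholds, function name, and data are invented for illustration; this is not Google’s actual algorithm.

```python
# Illustrative sketch only: not Google's actual system, just a toy mapping
# from anonymised speed samples on a road segment to a traffic colour.

def traffic_colour(speed_samples_kmh):
    """Classify a road segment from aggregated phone speeds (km/h)."""
    if not speed_samples_kmh:
        return "grey"      # no phones reporting: no information
    avg = sum(speed_samples_kmh) / len(speed_samples_kmh)
    if avg < 5:
        return "red"       # standing still for longer: a full-fledged jam
    if avg < 25:
        return "orange"    # stop-and-go traffic
    return "green"         # free-flowing traffic

# Many phones on the same segment reporting near-zero speeds
print(traffic_colour([3, 0, 4, 2, 1]))   # -> red
print(traffic_colour([40, 55, 48]))      # -> green
```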

For innovative products and services, an enormous amount of data is stored in a cloud until the end of time or the end of Google, whichever comes first. The only way to keep your data out of this cloud is to prohibit Google from recording it, which would prevent you from using Google Maps. This means that if you would like to be alerted about traffic jams, you have to permit them to record your location. Consequently, they find out who you are, where your home is, where you work, where you do your shopping, where you go on holiday. This collection of data becomes more and more suitable for predicting what you are going to do next, which is crucial not only to the business model of Google, but also to other hyperscaler platforms such as Apple, Facebook/Meta and Amazon, where the main concern is: What are you going to buy next? This is what machine learning is mainly being used for by the major platforms.

Several questions regarding privacy and data protection need to be addressed. For example, what do these companies do with the data, and how do they use it? Selling it to other firms, such as companies advertising in all kinds of applications, including your web browser, is extremely lucrative.

Another major issue is bias within predictions and decisions made without human intervention. In predictive analytics a central question is: Who are you? Is your income high or low? Are you a man or a woman? What is your education level, religion, personal taste, sexual preference? As an example, Google needs to know whether you are rich or poor when selling your data, because advertisers will offer you either a cheap or an expensive computer. This then appears in an ad or among the first hits of your search engine results. Companies have even started to charge different prices for the same product depending on the price of the device used to surf the internet.

Our decisions often involve prejudices and harmful biases that reinforce stereotypes. Just as we and our activities are not unbiased in the sense of being neutral and objective, neither is the data we produce. This means that if no human supervises the data used to train an algorithm, the algorithm is fed with, and then necessarily reproduces, all the good and bad sides of human judgement, including prejudices and racism. The algorithm learns whatever values a community shares, which is problematic because we want machines to judge us in a value-neutral way and not to privilege some of us for being white or black, rich or poor.
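To illustrate the point, here is a minimal Python sketch with invented data showing how a naive “model” trained on skewed historical decisions simply reproduces that skew. The groups, records, and function are hypothetical.

```python
# Minimal illustrative sketch with invented data: a "model" that simply
# memorises historical approval rates per group reproduces any bias
# contained in that history.

historical_decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def learn_approval_rates(records):
    """'Train' by memorising the approval rate observed for each group."""
    rates = {}
    for group in {r["group"] for r in records}:
        group_records = [r for r in records if r["group"] == group]
        rates[group] = sum(r["approved"] for r in group_records) / len(group_records)
    return rates

print(learn_approval_rates(historical_decisions))
# -> roughly {'A': 0.67, 'B': 0.33}: the historical skew becomes the decision rule
```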

So, in our AI Ethics Lab, we are dealing with ethics, biases, the GDPR, and privacy and data protection issues in machine learning development. We are discussing what practices developers may want to follow to prevent harmful consequences of these models and algorithms.

Do software engineers tell you their ethical dilemmas?

Here at our institution, many of the technological experts share their ethical concerns with us. We have to keep in mind that we are not working in a commercial enterprise, but in a research-driven environment. At the AI Ethics Lab, our first task was to understand what developers do. While we had an abstract understanding of algorithms, we had no idea what the software engineers were actually doing when developing them. Therefore, at our first meetings we interviewed some of them using the methodology of Erik Fisher (Socio-Technical Integration Research, STIR). Only after a couple of months did we start to dig deeper, and we found that our Lab members belonged to a particular subset of developers: they were already sensitised to these issues, which was somewhat of a surprise to us. We asked ourselves: “If a sizeable number of developers may already be aware of this, why are we having these problems in society?” When thinking about this, we found that the software engineers’ awareness is not the only issue: in interactions with customers, for example when preparing applications for a research grant, the developers noticed privacy or ethical issues raised by the technological problems at hand. “To avoid this, we could offer you an extra work package on questions of ethics”, they said. Often enough, they got the answer: “We’re not interested in that. Solve our problem. That’s all.”

In a prominent institute like AIT, how can the AI Ethics Lab trigger fundamental change? What are the ways to institutionalise responsible research and innovation (RRI)?

The only way is to listen and to try to understand what developers are doing, how they are doing it, and what they see as a problem. Then, we can try to come up with viable solutions to the problems we identify together. In the AI Ethics Lab, we are aiming to build a tool: a set of questions, routines, and practices with which developers can assess their projects. We will continue to listen, use their knowledge and understanding, and offer solutions which they will hopefully profit from. At AIT, we have seven specialised centers: Energy; Health and Bioresources; Digital Safety and Security; Vision, Automation and Control; Low Emission Transport; Technology Experience; and Innovation Systems and Policy, where I am located. All of them have machine learning groups, because the technique can be applied in all these fields. Over time, we are planning to reach out to all of these groups.

By raising awareness, do you notice that the software developers are opening up to RRI aspects?

Some people think that research ethics, responsible research and innovation, or corporate social responsibility are just a waste of time and hinder them from doing relevant research. But the majority of the people I have been talking to in research recognise these concepts as important. In fact, these topics are also gaining political relevance in Europe: at the moment, the European Union is debating a new set of regulations, the Digital Services Act and the Digital Markets Act, which will regulate the usage of machine learning algorithms at the European level.

The European Union is also debating how to regulate artificial intelligence technologies. The Commission’s proposal targets the top 10% or 20% of all artificial intelligence applications in terms of societal risk, i.e. risks to privacy and data protection, and discrimination due to bias. Those AI applications will be regulated quite closely, with enforcement through a process in which you have to prove that the effects of the application do not harm society.

At AIT, we are doing a lot of health-related research. This implies using patient data, though for a reason: the research is concerned with fundamental questions of far-reaching impact, which carry the promise of finding cures for a number of diseases and health conditions. At the same time, these projects involve risks regarding privacy and data protection. We will have to prove to our clients that our processes pass the European Union’s tests on digital services. This is a good reason to think about research ethics! Yet, apart from this, it is nice to see that with many people at AIT I don’t even need to make that argument. They recognise the importance regardless of the upcoming regulation.

Your Lab has just started a dialogue with civil servants. Why is this important?

One of the most critical parts of the state is the civil service. The government can issue decrees and the parliament can pass laws, but in the end, the civil service takes care of implementation. Civil servants are also the direct representatives of the state in our everyday lives: I do not meet the chancellor or any of the ministers on a daily basis, but I meet civil servants almost every day, such as police officers, teachers, social workers, tax office employees, and many more.

When the state applies artificial intelligence technologies in its services, they will be operated by the civil service, which raises questions of ethics, bias, data protection and privacy. Therefore, the civil servants must be aware not only of the promising aspects of artificial intelligence, such as the simplification of many facets of their work, but also of the risks these technologies carry and the need to regulate them.

One of your Co-Change Lab’s goals is to widen your impact outside of AIT. Where are you taking this path besides working with civil servants?

We have collaborated with the Austrian RRI platform, which includes all the large research institutions and universities of Austria. The platform hosted a workshop on RRI-inspired institutional change, specifically on how to implement such changes regarding AI and RRI. This platform certainly has a multiplier effect on the national level.

What if AIT makes a fundamental institutional change in terms of RRI? Can it affect the national level?

Let’s start with the international level: AIT is the most successful Austrian institution in the European Framework Programmes for Research and Technological Development, which means that we have the largest number of framework programme projects in the country. If researchers adopt certain practices of AI ethics and RRI and then participate in international programmes, they can multiply their experiences, which has an impact at the international level. On the national level, AIT can have a substantial impact as well, since we are one of the biggest partners of the Austrian government in technology research. So, if our developers keep asking about ethics-related aspects in their projects, at some point their clients may realise that there is something to research ethics and responsible research and innovation. I am deeply convinced that change more often starts from below than from above!