2021. 09. 08.

A Brief Talk with Nina Rilla

Nina Rilla

In your Co-Change Lab, you are focusing on the human aspects of autonomous systems, and you are working with ecosystem actors around autonomous systems. Which ecosystem actors have you reached?

The RAAS ecosystem is fairly big; there are actually dozens of actors we are connected with. There are universities, research organisations, and private companies that conduct research and development (R&D) in the field of autonomous systems. It has both domestic and international partners. Overall, the ecosystem consists of over 200 researchers.

You have reached them directly as a Co-Change Lab. What does that mean?

Our Lab has been involved in one of the research task forces within the ecosystem, which is related to ethics (the formal name is Ethical Acceptability, Desirability and Impact Assessment). What that means in practice is that this task force has, for example, organised events related to ethics and artificial intelligence. It has built responsibility-related roadmaps for internal and external use by the ecosystem partners, and developed approaches and procedures to assess the impacts of autonomous systems, in particular social and ethical impacts. The main idea of the whole ecosystem is to bring different R&D actors together, to facilitate partnerships, and to build and develop new projects that would be funded by Finnish or international funders.

And what are your experiences with companies? What does ethics mean for a company that works on new technologies?

Well, we haven't been directly involved in the company projects, but what I have learned through various workshops and events where we have had speakers from the big companies is that their main concern related to artificial intelligence and autonomous systems is that people are very worried about the collection and use of data. And that's usually where the ethics discussion centres: what kind of data are the systems able to collect and store, and how is this data going to be used? So this is the main concern of the companies as well as the users. The same question comes up in the context of autonomous trams, a new autonomous system domain that we recently got engaged in. I understand that users are concerned about privacy, but I would love to see the ethics discussion broaden to other areas as well.

You mean the autonomous trams of the city of Tampere.

The tram actually started running on schedule in early August, so it's brand new. And yesterday we held the first citizen engagement workshop. We were discussing values and how passengers feel as more and more autonomy is introduced to the tram. Of course, the concerns are again largely related to data collection and use, but also to safety. What kind of data is collected from the passengers, and how are they monitored? How safe is the tram when there is no driver? Monitoring, and especially facial recognition, raises questions of integrity, dignity and equality. But this is such a wide field; ethics and responsibility are quite context-specific, even though there are common themes, like data and privacy. So when we are talking about autonomous cars and smart transport, there are certain questions; when we talk about drones, there are other questions; and autonomous ships deal with specific seafaring-related ethical questions. It is important, however, that discussions of ethics and responsibility arise in different fields of society, as they are really important themes in creating acceptance of future artificial-intelligence-driven solutions.

Let's talk about your internal work. You take part in the internal responsibility programme development at VTT. What kind of role do you play in this programme?

We act as advisers to our management: those who are responsible for developing the responsibility programme and implementing the activities within VTT. We have, for example, organised internal co-creative workshops.

Does that mean your managers will create new institutional guidelines and practices for responsibility? And does it mean that the whole organisation should act accordingly?

Yes. There are already activities being implemented. For example, we have new gender equality guidelines created for the new Horizon Europe projects, and a permanent internal training module about responsibility for researchers, to increase understanding of responsibility beyond research ethics. VTT is also developing a sustainability index and working to integrate the SDGs into its operations. So different kinds of responsibility-related activities will gradually be introduced. We are currently building a roadmap for launching these new activities.

The Co-Change Project is working on institutional change, but it is only a three-year project. Do you think you can make real changes in your organisation during such a short period of time?

I hope so, but I don't believe three years is enough in many contexts. Since my organisation, VTT, has good momentum and a serious approach to responsibility and sustainability issues (which, by the way, include social sustainability as well), it can definitely go far in three years, but institutional change in an ecosystem context, like RAAS, is more demanding. In practice, you need to create change project by project. For me, change is something continuous. It is not something you reach with one intervention, after which something suddenly changes; it doesn't work like that. You need sustained awareness-raising activities. I hope that responsibility in general will come to have the same kind of role in projects as gender equality has on the side of the European Commission. I hope people will be compelled to think about how responsibility and ethics are integrated into their project, in all of its activities. But I think this change takes years, and it should never be discontinued. It finally comes through when things become compulsory. The forces pushing towards sustainable and responsible business and research are definitely already on the move.