How can we design robots that we trust in interaction?
An autonomous system is a machine or software capable of operating in and interacting with an environment without direct and continuous human control. While systems with various degrees of autonomy have been developed over the past fifty years, these have predominantly operated in restricted environments (such as production lines) or have had very limited functionality (such as vacuum-cleaning robots). The development of new classes of autonomous systems, such as medical robots, driverless cars, and assistive care robots, has opened up questions about how truly autonomous systems can be integrated into our society.

We are primarily concerned with two issues that are crucial to the future acceptance and use of autonomous systems: ethics and trust. Once a system is truly autonomous, learning from its interactions, moving through and manipulating the world we live in, and making decisions by itself, we must be certain that it will act safely and ethically, i.e. that it can distinguish 'right' from 'wrong' and make the decisions we would expect of it. For society to accept these new machines, we must also trust them, i.e. we must believe that they are reliable and that they are trying to assist us, especially in close human-robot interaction.
Doctoral College TrustRobots
Relevant Related Publications on Trust & Robots
Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pp. 3–12, Association for Computing Machinery, Cambridge, United Kingdom, 2020, ISBN: 9781450367462.

Exploring Trust in Human-Agent Collaboration. In: Proceedings of the 17th European Conference on Computer-Supported Cooperative Work, European Society for Socially Embedded Technologies (EUSSET), 2019.

Ethics and Trust: Principles, Verification and Validation (Dagstuhl Seminar 19171). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019.