Hélène UNREIN will defend her thesis on December 15th, 2023 at 10:30 a.m. (Petit Amphi – ENSC) on the subject: “Evolution of autonomous systems: from tool to teammate in human-autonomous teams (HAT)”.
This thesis focuses on the improvement of autonomous systems within Human-Autonomous System teams. Current work in the domain highlights the challenge posed by the evolution of autonomous systems from the status of a simple tool to that of a teammate. More specifically, we are interested in how autonomous systems take the mental states of humans into account. The ability to infer mental states would enable a system to evaluate, interpret, predict and anticipate human behaviour. Our work is organised around a model representing the activity of the Human-Autonomous System team. We propose three types of decision distribution: central, distributed and common. These are examined in three separate studies. Through these application cases, we seek solutions that improve the autonomous system’s ability to take account of human mental states.

The first study looks at monitoring (eye-tracking) of a human agent. We identify variables that can be used to distinguish cognitive fatigue from hypovigilance, even though these two mental states have similar symptoms. Thus, in our use case, a level 4 autonomous vehicle will be able to estimate the state of the human agent and respond appropriately.

The second study concerns the impact of the initial trust that drivers of conventional vehicles place in an autonomous vehicle. By anticipating the risky behaviour of human users, autonomous systems will perform better and be more widely accepted. In addition, the introduction of autonomous vehicles will raise road safety issues. This study shows that initial trust in autonomous vehicles leads drivers of conventional vehicles to adopt riskier behaviour. This observation also highlights the importance of communication between autonomous vehicles and other road users in order to reduce the risk of accidents.

The final study aims to understand the link between performance and trust.
This case is theoretical at this stage, with the human agent and the autonomous system having equivalent prerogatives (no hierarchy between the two agents). Our results show that trust is more closely linked to the perceived competence of the system than to the real or perceived team performance. These results lead us to identify a set of directions for improving autonomous systems and to propose a new model of trust dynamics based on the expectations of the human agent. Overall, our thesis contributes to improving mutual knowledge within the human-autonomous system team.