Explainable Robot Navigation
As autonomous mobile robots are deployed in increasingly dynamic and complex environments, their ability to provide understandable explanations for their actions becomes crucial. This thesis addresses the challenge of making robot navigation explainable by leveraging a hybrid model that combines machine learning techniques with symbolic reasoning methods. Furthermore, the thesis explores how human explanation preferences can be modeled and how different explanation attributes affect recipients' understanding, satisfaction, and trust. The goal is to integrate these explanation aspects and approaches into a unified framework that supports explainable navigation in robotics.