Automation transparency is a means of making autonomous systems understandable and predictable by disclosing what the system is currently doing, why it is doing it, and what it will do next. Insight into a system's internal reasoning is an important prerequisite for human supervision of autonomous collision avoidance systems. However, knowledge of transparency in this domain, and of its relationship to human supervisory performance, remains limited. This paper therefore investigates how an information processing model and a cognitive task analysis can drive the development of transparency concepts. In addition, realistic traffic situations, reflecting the variation in collision type and context that occurs in real life, were developed to empirically evaluate these concepts. Together, these activities provide the groundwork for exploring the relationship between transparency and human performance variables in the autonomous maritime context.