![ ](../assets/banner2.png width="480px" align=left)   [Home](../index.html)  [Speakers](../speakers.html)  [Program](../program.html)  [Call for Papers](../call.html)  [Accepted Papers](../papers.html)  [Organizers](../organizers.html)
![ ](./photos/morgan.png width="210px" align=left) [Phillip Morgan](https://profiles.cardiff.ac.uk/staff/morganphil) holds a Personal Chair in Human Factors and Cognitive Science within the School of Psychology at Cardiff University. He is an international expert in human-centred design of AI and automation, HMI design, HCI, HRI, intelligent mobility, and cyberpsychology. With more than 50 grants (>£37m UK), mainly from research councils, industry, and government, and more than 100 publications, Prof Morgan has often challenged the ways in which some approach technological development, advocating human-centred design and a design-for-failure approach to better achieve success in terms of acceptance, adoption, and safe and continued use of current and emerging technologies. At Cardiff University, Prof Morgan is Director of the Human Factors Excellence Research Group (HuFEx), Director of Research within the Centre for AI, Robotics and Human Machine Systems (IROHMS), and Theme Lead for Human Factors within the Digital Transformation Innovation Institute, where he is also Co-Lead for Transport. He is a Guest Professor in Psychology within the Division of Health, Medicine and Rehabilitation at Luleå University of Technology, Sweden, and Director of the Airbus Centre of Excellence in Human-Centric Cyber Security (H2CS), having been seconded part-time to Airbus for more than five years. Recent research projects include: Rule of Law in the Age of AI: Distributive Liability for Multi-Agent Societies (ESRC-JST, with collaborators at the Universities of Kyoto, Osaka and Doshisha); developing Human Factors Guidelines for Robots and Autonomous Systems (HSSRC); and Artificial Intelligence for Collective Intelligence (AI4CI, https://ai4ci.ac.uk/), a ~£12m (UK) hub where he co-leads a cross-cutting Human-Centric Design theme spanning other core themes (e.g. Environmental Intelligence, Financial Technology, Smart City Design).
**Title:** “Talk to me KITT…Explain it Buddy”: The Efficacy of Robot and Dashboard Self-Driving Car Informational Assistants on Trust and Blame Following an Accident **Abstract:** Despite the increasing sophistication of self-driving cars (SDCs), there will be instances where accidents occur – e.g. caused by the action(s) of third parties. An SDC may not always be able to stop in time, and even when it can, the passenger(s) and other road users could be in danger. This presents a number of dilemmas – not only moral, but also in relation to factors such as trust – that can impact acceptance, adoption, and continued usage of the technology. Thus, it is key to investigate ways to minimize loss of trust in SDCs under conditions involving accidents – which will unfortunately but inevitably occur. One possibility is to implement informational assistants that keep occupant(s) informed during journeys. A growing body of research supports positive effects of HMI-based informational assistants on trust (e.g. Waytz et al., 2016), and this has been extended to robots (Lee et al., 2019; Wang et al., 2021), albeit under conditions where an SDC is not involved in an incident. During this workshop talk, I will discuss a paradigm developed to further investigate the efficacy of robot and non-robot informational assistants on trust and blame in SDCs prior to and following an accident, and present example findings from studies conducted online and in person. I will also discuss how dialogue style, as well as the degree of explanation offered, may help or hinder trust under different conditions. I will argue that the benefits of a robot informational assistant within accident scenarios, at least, are limited – perhaps because of expectations surrounding its capabilities – and discuss possible factors to consider in their future development.