Link to original content: http://pubmed.ncbi.nlm.nih.gov/35573901/
Self-Explaining Social Robots: An Explainable Behavior Generation Architecture for Human-Robot Interaction
2022 Apr 29;5:866920. doi: 10.3389/frai.2022.866920. eCollection 2022.

Sonja Stange et al. Front Artif Intell.

Abstract

In recent years, the ability of intelligent systems to be understood by developers and users has received growing attention. This holds in particular for social robots, which are supposed to act autonomously in the vicinity of human users and are known to raise peculiar, often unrealistic, attributions and expectations. However, explainable models that, on the one hand, allow a robot to generate lively and autonomous behavior and, on the other, enable it to provide human-compatible explanations for this behavior are still missing. In order to develop such a self-explaining autonomous social robot, we have equipped a robot with its own needs that autonomously trigger intentions and proactive behavior, and that form the basis for understandable self-explanations. Previous research has shown that undesirable robot behavior is rated more positively after an explanation has been given. We thus aim to equip a social robot with the capability to automatically generate verbal explanations of its own behavior by tracing its internal decision-making routes. The goal is to generate social robot behavior in a way that is generally interpretable, and therefore explainable on a socio-behavioral level, increasing users' understanding of the robot's behavior. In this article, we present a social robot interaction architecture designed to autonomously generate social behavior and self-explanations. We set out requirements for explainable behavior generation architectures and propose a socio-interactive framework for behavior explanations in social human-robot interactions, one that enables explaining and elaborating according to users' needs for explanation as they emerge within an interaction. We then introduce an interactive explanation dialog flow concept that incorporates empirically validated explanation types. These concepts are realized within the interaction architecture of a social robot and integrated with its dialog processing modules.
We present the components of this interaction architecture and explain their integration to autonomously generate social behaviors as well as verbal self-explanations. Lastly, we report results from a qualitative evaluation of a working prototype in a laboratory setting, showing that (1) the robot is able to autonomously generate naturalistic social behavior, and (2) the robot is able to verbally self-explain its behavior to the user in line with the user's requests.
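The core loop described in the abstract (needs autonomously trigger intentions and behavior, and the decision-making route is traced so it can later be verbalized) can be illustrated with a minimal Python sketch. All class names, thresholds, decay rates, and the explanation template below are hypothetical stand-ins invented for illustration; the paper's actual architecture is considerably richer.

```python
class Need:
    """A homeostatic need that decays over time and becomes urgent
    once its satisfaction drops below a threshold."""
    def __init__(self, name, threshold=0.5, decay=0.1):
        self.name, self.threshold, self.decay = name, threshold, decay
        self.satisfaction = 1.0  # fully satisfied at start

    def update(self):
        self.satisfaction = max(0.0, self.satisfaction - self.decay)

    @property
    def urgent(self):
        return self.satisfaction < self.threshold

class SelfExplainingRobot:
    """Picks a behavior for the most urgent need and records a
    decision trace that can later be turned into a verbal explanation."""
    def __init__(self, needs):
        self.needs = needs
        self.trace = []  # the internal decision-making route

    def step(self):
        for need in self.needs:
            need.update()
        urgent = [n for n in self.needs if n.urgent]
        if not urgent:
            return None
        chosen = min(urgent, key=lambda n: n.satisfaction)
        behavior = f"Strategy_{chosen.name}"
        self.trace.append((chosen.name, chosen.satisfaction, behavior))
        return behavior

    def explain_last(self):
        """Verbalize the most recent entry of the decision trace."""
        if not self.trace:
            return "I have not done anything yet."
        need, level, behavior = self.trace[-1]
        return (f"I started {behavior} because my need for {need} "
                f"dropped to {level:.1f}, below my comfort threshold.")

robot = SelfExplainingRobot([Need("social_contact", decay=0.3),
                             Need("rest", decay=0.1)])
for _ in range(2):
    robot.step()
print(robot.explain_last())
```

The key design point mirrored here is that the explanation is generated from the same data structure that drove behavior selection, rather than being authored separately, so explanation and behavior cannot drift apart.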

Keywords: autonomous explanation generation; explainability; human-robot interaction (HRI); interaction architecture; social robots; socio-interactive explanation generation; transparency; user-centered explanation generation.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1. Interaction situation in which a puzzled user is looking for a behavior explanation by the robot.
Figure 2. Overview of the proposed framework for robot self-explanations in social human-robot interaction.
Figure 3. Flow model of explanation dialogs in socio-interactive human-robot interaction.
Figure 4. Schematic view of the explainable interaction architecture for an autonomous self-explaining social robot. The components are marked using rectangles and the most important communication interfaces between components are indicated by black arrows. Dotted arrows are used to enhance readability and have the same meaning as solid arrows.
Figure 5. State machine describing possible interaction modes along with their transitions.
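An interaction-mode state machine of the kind shown in Figure 5 can be sketched as a transition table keyed by (mode, event) pairs. The mode and event names below are hypothetical, since the abstract does not enumerate the actual modes:

```python
# Hypothetical transition table: (current mode, event) -> next mode.
TRANSITIONS = {
    ("idle", "user_detected"): "interacting",
    ("interacting", "explanation_requested"): "explaining",
    ("explaining", "explanation_finished"): "interacting",
    ("interacting", "user_left"): "idle",
}

class InteractionModeMachine:
    def __init__(self, start="idle"):
        self.mode = start

    def fire(self, event):
        """Follow the transition for this event, if one is defined."""
        nxt = TRANSITIONS.get((self.mode, event))
        if nxt is None:
            raise ValueError(f"no transition for {event!r} in mode {self.mode!r}")
        self.mode = nxt
        return self.mode

m = InteractionModeMachine()
m.fire("user_detected")          # idle -> interacting
m.fire("explanation_requested")  # interacting -> explaining
print(m.mode)  # explaining
```

Keeping transitions in an explicit table makes the mode graph itself inspectable, which fits the paper's theme: the same structure that governs behavior can be read back when explaining it.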
Figure 6. Example Behavior Tree for StrategyInitiateContact: The octagons represent selector nodes, rectangles represent sequence nodes, ellipses represent condition nodes, and rounded rectangles represent action nodes.
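The four node types named in the Figure 6 caption (selector, sequence, condition, action) can be sketched minimally as follows. The concrete conditions and actions for StrategyInitiateContact are invented for illustration; only the node semantics follow standard behavior-tree conventions:

```python
SUCCESS, FAILURE = "success", "failure"

class Condition:
    """Leaf that checks a predicate against the interaction context."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, ctx):
        return SUCCESS if self.predicate(ctx) else FAILURE

class Action:
    """Leaf that performs an effect and logs it for later explanation."""
    def __init__(self, name, effect):
        self.name, self.effect = name, effect
    def tick(self, ctx):
        self.effect(ctx)
        ctx.setdefault("log", []).append(self.name)
        return SUCCESS

class Sequence:
    """Succeeds only if every child succeeds, in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tries children in order until one succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == SUCCESS:
                return SUCCESS
        return FAILURE

# Hypothetical StrategyInitiateContact: greet if a person is visible,
# otherwise fall back to looking around.
tree = Selector(
    Sequence(Condition(lambda c: c["person_visible"]),
             Action("greet", lambda c: None)),
    Action("look_around", lambda c: None),
)

ctx = {"person_visible": True}
tree.tick(ctx)
print(ctx["log"])  # ['greet']
```

Because every executed action is appended to the context log, a tick through the tree leaves behind exactly the kind of decision trace the architecture needs for generating self-explanations.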
Figure 7. The content of ExplanationInfo (middle column) and the source messages from which the relevant information is extracted (right column).
Figure 8. A sequence diagram showing the interaction between different components of the architecture to initiate a Strategy (high-level socio-interactive behavior) and simultaneously extract the information necessary for explaining this behavior (Requirement 4).
Figure 9. The experimental setup and procedure, including (A) the participant being introduced to the robot and the task by the experimenter, (B) the actual human-robot interaction supervised by technicians, and (C) a post-interaction survey taken in a separate room.


