The development of robots capable of interacting with humans has made tremendous progress in the last decade, leading to the expectation that robots will soon be increasingly deployed in public spaces, for example as receptionists, shop assistants, waiters, or bartenders. In these scenarios, robots inevitably face human-robot interactions (HRI) that are short and dynamic and in which the robot must handle multiple people at once.

To support this form of interaction, robots typically require specific skills, including robust video and audio processing, fast reasoning and decision-making mechanisms, and natural and safe path-planning algorithms. This physically embodied, dynamic, real-world context is arguably the most challenging domain for multimodal interaction: the state of the physical environment may change at any time, the sensors must cope with noisy and uncertain input, and the robot platform must combine interactive social behavior with physical, task-based action such as moving and grasping.
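As a rough illustration of such an architecture (a minimal sketch in Python; the names Percept and ControlLoop, the sensor and policy interfaces, and the 0.6 confidence cutoff are invented for this example and do not come from any particular robot platform), a sense-decide-act loop over uncertain multimodal input might look like this:

    import time
    from dataclasses import dataclass

    @dataclass
    class Percept:
        """A time-stamped observation with an explicit confidence score."""
        source: str        # e.g. "vision" or "speech"
        payload: dict      # detected faces, recognized words, ...
        confidence: float  # sensors are noisy, so every percept is uncertain
        timestamp: float

    class ControlLoop:
        """Minimal sense-decide-act loop: fuse uncertain multimodal
        percepts, pick a social or task action, and execute it."""

        def __init__(self, sensors, policy, actuators):
            self.sensors = sensors      # callables returning Percept or None
            self.policy = policy        # maps reliable percepts to an action name
            self.actuators = actuators  # maps action names to callables

        def step(self):
            percepts = [p for read in self.sensors if (p := read()) is not None]
            # Discard low-confidence input instead of acting on noise.
            reliable = [p for p in percepts if p.confidence >= 0.6]
            action = self.policy(reliable)  # e.g. "greet", "move_to", "grasp"
            self.actuators[action]()

        def run(self, hz=10):
            while True:  # the environment can change between iterations
                self.step()
                time.sleep(1.0 / hz)

In a real system each of these components is a substantial subsystem in its own right; the sketch only illustrates the separation of uncertain perception, decision making, and action described above.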

Most current social robots play the role of a companion, often in a long-term, one-on-one relationship with the user. In this context, the primary goal for the robot is to build a relationship with the user through social interaction; the robot is first and foremost an interactive partner, and any task-based behavior is secondary to this overall goal. In contrast, robots in public spaces must support a style of interaction that is distinctive in two main ways. First, while most existing social robots deal primarily with one-on-one situations, robots in public spaces face dynamic, multi-party scenarios: people constantly enter and leave the scene, so the robot must continually choose appropriate social behavior while interacting with a succession of new partners. Second, while existing social robotics projects generally treat social interaction as the primary goal, robots in public spaces have to support social communication in the context of a cooperative, task-based interaction.

The HRI group at the Center for Human-Computer Interaction researched HRI with robots in public spaces within the IURO project (Interactive Urban Robot). The life-sized robot developed in the course of the project is able to navigate through densely populated inner-city environments and to proactively engage pedestrians in conversation. In contrast to comparable projects on social robots, the researchers in IURO went a step further by deploying the robot in unrestricted public space. Unlike restricted public environments such as museums, interaction in unrestricted environments must cope with numerous uncontrollable contextual conditions. The development of the IURO robot followed a user-centered design approach: numerous studies were carried out over the entire project duration to evaluate and validate the robot and its social behavior. The requirements phase showed that the robot should have an anthropomorphic appearance and should approach people from the right or left, but not head-on. To provide IURO's human interaction partners with a positive user experience, the HRI group explored the robot's social behavior. Verbal feedback turned out to be the most important channel in such a scenario, but facial expressions can evoke additional empathy towards the robot, and a screen can serve as a backup channel for reassurance.

Another project involving members of the HRI group was JAMES (Joint Action for Multimodal Embodied Social Systems). The goal of JAMES was to develop a robot that supports socially appropriate, multi-party, multimodal interaction in a bartending scenario. One line of research in JAMES compared socially appropriate with purely task-based interaction; the project partners found that interactions with socially intelligent robots are more efficient. Another line of research developed and evaluated methods from computational linguistics for processing spoken language in the context of HRI with robots in public spaces. For example, members of the HRI group applied ellipsis detection and word-similarity computation to parse grammatically ill-formed spoken sentences. The group also evaluated whether automatic topic recognition can be applied to spoken language without additional context information.
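The word-similarity idea can be sketched as follows (a hypothetical Python illustration, not the JAMES implementation: the word embeddings, the intent prototype vectors, and the 0.5 threshold are all assumptions made for this example). A fragmentary, elliptical utterance such as "a beer, please" is mapped to the closest known intent by averaged word-vector similarity:

    import numpy as np

    def cosine(u, v):
        """Cosine similarity between two vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def interpret_fragment(words, embeddings, intents, threshold=0.5):
        """Map an elliptical utterance to the closest known intent.
        `embeddings` maps words to vectors, `intents` maps intent labels
        to prototype vectors; both are assumed to be trained elsewhere."""
        vectors = [embeddings[w] for w in words if w in embeddings]
        if not vectors:
            return None  # nothing recognizable: ask for clarification
        utterance = np.mean(vectors, axis=0)  # crude bag-of-words average
        label, score = max(
            ((name, cosine(utterance, proto)) for name, proto in intents.items()),
            key=lambda pair: pair[1],
        )
        # Reject weak matches rather than guessing the user's intent.
        return label if score >= threshold else None

Because the similarity is computed on word vectors rather than on a full grammatical parse, such an approach degrades gracefully on ungrammatical or fragmentary input, which is exactly the kind of speech a bartending scenario produces.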

In recent years, the HRI group has organized and participated in an ongoing series of workshops on this topic at different conferences.
