DIS2020

More than Human Centred Design

Demonstrations

Punishable AI

  • Beat Rossmy, LMU Munich, Munich, Germany
  • Sarah Theres Völkel, LMU Munich, Munich, Germany
  • Patricia Kimm, LMU Munich, Munich, Germany
  • Alexander Wiethoff, LMU Munich, Munich, Germany
  • Elias Naphausen, University of Applied Sciences Augsburg, Augsburg, Germany
  • Andreas Muxel, University of Applied Sciences Augsburg, Augsburg, Germany
  • Corresponding email(s): beat.rossmy@ifi.lmu.de
  • Project webpage
  • ACM DL Link: Associated Paper or Pictorial

As intelligent systems enter our daily lives, interaction paradigms have to be designed for non-expert users. For example, how can we give feedback to an autonomous car after a fatal crash? Furthermore, we can speculate whether self-learning systems will have to take responsibility for their own actions. If so, how do we design interactions that are meaningful for both the system and the user? Looking back, the punishment of objects was practiced in ancient cultures, where objects that killed people (e.g., a falling millstone) were no longer used or were even banished. This approach seems naive but illustrates what interaction with “animated” things could look like. Nowadays, aggressive, violent, and abusive behavior against technologies can be observed. Could punishment thus become an accepted measure for interacting with robots or intelligent systems? Our demonstration puts you directly into this context and confronts you with this interaction. Would you scold a robot in order to train it? Would you dazzle it with a flashlight, or would you even break its legs to convey your intent? This thought-provoking experiment does not advocate punishment as a desirable interaction but rather sparks a discussion about existing design strategies such as the ubiquitous anthropomorphic design of technologies.
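
The scenario hinges on the idea that punishment could serve as a feedback signal to a self-learning system. As a purely illustrative sketch (not code from this project), the following toy Q-learning loop shows how a detected punishment event might be mapped to a negative reward so that the punishment becomes “meaningful” to the system; all state, action, and parameter names here are hypothetical assumptions.

```python
# Illustrative sketch only: punishment as a negative reward signal for a
# toy tabular Q-learning agent. Nothing below comes from the Punishable AI
# project itself; states, actions, and rewards are invented for this example.
import random
from collections import defaultdict

ACTIONS = ["turn_left", "turn_right", "walk_forward"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

q_table = defaultdict(float)  # maps (state, action) -> estimated value


def choose_action(state):
    """Epsilon-greedy policy over the toy action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])


def update(state, action, reward, next_state):
    """Standard Q-learning update; punishment enters as a negative reward."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )


# A user "scolding" the robot after a bad action could be mapped to a
# strongly negative reward, discouraging that action in that state:
state = "obstacle_ahead"
action = choose_action(state)
user_punished = True  # e.g. scolding, dazzling, or breaking a leg was detected
reward = -10.0 if user_punished else 1.0
update(state, action, reward, "collision" if user_punished else "clear_path")
```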

Who is the target audience and why design for them? Our target audience consists of researchers, designers, and users. It is important that we think about and discuss now how interaction with trainable technologies should look in the future. Design trends such as the anthropomorphic design of voice assistants, machines, and robots could easily lead to a future in which technology is treated like a human. But wherever positive human-like interactions are implemented, there is room for negative interactions as well. If we can thank Siri for a well-performed task, we can scold as well, even if the developers did not intend it. So what is our responsibility as designers and researchers? And what would we, as users, actually like to use? What should technology look like, and how should we interact with it?

What were the challenges or limitations encountered in this project? We tried to combine a speculative design approach with a classical user study setup. A big challenge was to design a scenario in which participants actually believed in the necessity of punishing a robot. Without their belief, we would have no chance of observing their reactions and, based on those, drawing conclusions about their acceptance of punishment. Training the robot therefore seemed a suitable scenario, since most users have likely experienced “drill”-like learning situations before and can thus relate to being scolded as feedback for bad performance. The other, more extreme and violent feedback mechanisms are based on observable real-world behavior. Dog training sometimes uses water pistols to establish an aversive stimulus, which inspired dazzling the robot with a flashlight. Insects and vermin are often killed for pragmatic reasons but are also the target of (more or less socially accepted) sadistic interactions. This was a central inspiration for the form and shape of the robot in our experiment, balancing between socially rejected behavior and an extreme yet meaningful interaction. However, the design and the instructions to punish the robot are limitations of the setup and study, since users could not freely choose which punishment to apply but were instead restricted to a scripted procedure.

What are the opportunities and next steps for this project? Based on the challenges described above, the logical next step is to investigate the influence of the robot's design on user behavior. How would users behave with anthropomorphic designs or, in contrast, with abstract, inanimate ones? What would happen if we performed a similar experiment at home, without participants being directly observed and thus influenced? Would users accept and get used to kicking their vacuum-cleaning robot as a form of feedback? Would it be more desirable to commend correct behavior instead of using violence? Furthermore, the idea of a destructive, incremental, and limited punishment method holds great potential as a meaningful interaction for users. Our prior knowledge, social priming, and understanding of cause and effect implicitly give this interaction meaning. It is therefore crucial to further investigate how such an interaction could be transferred into, for example, non-zoomorphic, non-anthropomorphic designs.

To the Demo Visitors: Based on previous discussions, we know that this project polarizes. We initially submitted to the demo session (in addition to the full paper category) precisely to involve you: to give you the opportunity to experience the different punishment techniques; to experience how demanding it is to accept a lifeless object as intelligent, so that it can process and understand words and their meaning; and, at the same time, how demanding it is to remember that the robot is lifeless and only a thing, so that breaking its legs is actually no different from breaking a stick. We were most interested in your reactions and opinions, so please reach out to beat.rossmy@ifi.lmu.de if you want to get in touch or start a discussion. (We also have 200 robot legs lying around, so feel free to invite us for a demo after COVID-19!) Please remember that we do not encourage the design of violent interactions in HCI; this project aims to achieve the opposite. Please think about why you design something, reflect on how it impacts users, and always remember the responsibility we have in shaping future technologies. No robots were permanently harmed during the experiment.