More than Human Centred Design
Keeping Designers in the Loop: Communicating Inherent Algorithmic Trade-offs Across Multiple Objectives
Bowen Yu, Computer Science and Engineering, University of Minnesota, Minneapolis, Minnesota, United States
Ye Yuan, Department of Computer Science and Engineering, University of Minnesota, Minneapolis, Minnesota, United States
Loren Terveen, GroupLens, Department of Computer Science, University of Minnesota, Minneapolis, Minnesota, United States
Zhiwei Steven Wu, University of Minnesota - Twin Cities, Minneapolis, Minnesota, United States
Jodi Forlizzi, Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Haiyi Zhu, Human Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Corresponding email(s): email@example.com
- Project webpage
- Research group webpage
- ACM DL Link: Associated Paper or Pictorial
Artificial intelligence (AI) algorithms are widely used in products and services, including services that assist human decision making in high-stakes contexts such as recidivism prediction. However, these algorithms are complex and involve trade-offs, often between prediction accuracy and fairness to population subgroups. Algorithmic trade-offs are critical: they affect the intended user experience and can raise serious ethical concerns or lead to societal-level consequences. We propose a method that helps designers and users understand the trade-offs in algorithms and select algorithms whose trade-offs are consistent with their goals and needs. We demonstrate our method with an interactive visualization tool built for the context of predicting criminal defendants’ likelihood to reoffend.
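To make the accuracy/fairness trade-off concrete, the sketch below compares two hypothetical classifiers on overall accuracy and on the gap in false-positive rates between two subgroups. This is an illustrative assumption, not the tool's actual implementation: the toy data, model predictions, and the choice of false-positive-rate gap as the fairness measure are all made up for this example.

```python
# Illustrative sketch of an accuracy/fairness trade-off (toy data,
# hypothetical models; not the demo system's actual code).

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def false_positive_rate(y_true, y_pred, group, g):
    """FPR within subgroup g: share of true negatives predicted positive."""
    neg = [p for t, p, gr in zip(y_true, y_pred, group) if gr == g and t == 0]
    return sum(neg) / len(neg)

def fpr_gap(y_true, y_pred, group):
    """Fairness measure: largest FPR difference across subgroups."""
    fprs = [false_positive_rate(y_true, y_pred, group, g) for g in set(group)]
    return max(fprs) - min(fprs)

# Toy labels (1 = reoffends), subgroup membership, and two models' predictions
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
model1 = [0, 1, 1, 1, 0, 0, 1, 1]   # more accurate, but unequal FPRs
model2 = [0, 0, 1, 0, 0, 0, 1, 0]   # less accurate, but equal FPRs

for name, pred in [("model1", model1), ("model2", model2)]:
    print(name, "accuracy:", accuracy(y_true, pred),
          "FPR gap:", fpr_gap(y_true, pred, group))
```

Here model1 achieves higher accuracy (0.875 vs. 0.75) but a larger false-positive-rate gap between subgroups (0.5 vs. 0.0), so neither model dominates; our visualization aims to surface exactly this kind of choice so designers can pick the point on the trade-off that matches their goals.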
Who is the target audience and why design for them? Designers working on AI products, and potential users of those products.
What were the challenges or limitations encountered in this project? One of the challenges we encountered during this project was explaining algorithmic trade-offs in machine learning to users with non-technical backgrounds.
What are the opportunities and next steps for this project? Our next step is an “authoring tool” that lets designers create their own visualizations of the trade-offs between accuracy and fairness measures in the algorithmic systems they work with.
To the Demo Visitors: Please use the following Google form if you have any feedback on our prototype: https://forms.gle/3EYWvVipNj1ReP2n8.