{"id":505,"date":"2025-07-07T10:35:52","date_gmt":"2025-07-07T10:35:52","guid":{"rendered":"https:\/\/dis.acm.org\/2025\/?page_id=505"},"modified":"2025-07-07T10:48:11","modified_gmt":"2025-07-07T10:48:11","slug":"publications","status":"publish","type":"page","link":"https:\/\/dis.acm.org\/2025\/publications\/","title":{"rendered":"Publications"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Paper Sessions<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"#Extended_Reality\" data-type=\"internal\" data-id=\"#Extended_Reality\">Extended Reality<\/a><\/li>\n\n\n\n<li><a href=\"#More-than-Humans\" data-type=\"internal\" data-id=\"#More-than-Humans\">More-than-Humans<\/a><\/li>\n\n\n\n<li><a href=\"#Social_Robots_and_Agents\" data-type=\"internal\" data-id=\"#Social_Robots_and_Agents\">Social Robots and Agents<\/a><\/li>\n\n\n\n<li><a href=\"#Speculative_Design_and_Futures\" data-type=\"internal\" data-id=\"#Speculative_Design_and_Futures\">Speculative Design and Futures<\/a><\/li>\n\n\n\n<li><a href=\"#Accessibility_and_Inclusive_Design\" data-type=\"internal\" data-id=\"#Accessibility_and_Inclusive_Design\">Accessibility and Inclusive Design<\/a><\/li>\n\n\n\n<li><a href=\"#Generative_AI_Tools_for_Learning\">Generative AI Tools for Learning<\/a><\/li>\n\n\n\n<li><a href=\"#Privacy_and_Security\" data-type=\"internal\" data-id=\"#Privacy_and_Security\">Privacy and Security<\/a><\/li>\n\n\n\n<li><a href=\"#Sound_and_Haptics\" data-type=\"internal\" data-id=\"#Sound_and_Haptics\">Sound and Haptics<\/a><\/li>\n\n\n\n<li><a href=\"#Ambient_Technologies\" data-type=\"internal\" data-id=\"#Ambient_Technologies\">Ambient Technologies<\/a><\/li>\n\n\n\n<li><a href=\"#Critical_Perspectives\" data-type=\"internal\" data-id=\"#Critical_Perspectives\">Critical Perspectives<\/a><\/li>\n\n\n\n<li><a href=\"#Designing_Generative_AI_Tools\" data-type=\"internal\" data-id=\"#Designing_Generative_AI_Tools\">Designing Generative AI Tools for Creative 
Work<\/a><\/li>\n\n\n\n<li><a href=\"#Studying_Generative_AI_Use\" data-type=\"internal\" data-id=\"#Studying_Generative_AI_Use\">Studying Generative AI Use in Creative Work<\/a><\/li>\n\n\n\n<li><a href=\"#Collaborative_and_Participatory_Design\" data-type=\"internal\" data-id=\"#Collaborative_and_Participatory_Design\">Collaborative and Participatory Design<\/a><\/li>\n\n\n\n<li><a href=\"#Narrative_and_Storytelling\" data-type=\"internal\" data-id=\"#Narrative_and_Storytelling\">Narrative and Storytelling<\/a><\/li>\n\n\n\n<li><a href=\"#Sustainability_and_Environmental_Awareness\" data-type=\"internal\" data-id=\"#Sustainability_and_Environmental_Awareness\">Sustainability and Environmental Awareness<\/a><\/li>\n\n\n\n<li><a href=\"#VR_and_AR\" data-type=\"internal\" data-id=\"#VR_and_AR\">VR and AR<\/a><\/li>\n\n\n\n<li><a href=\"#Community_and_Social_Design\" data-type=\"internal\" data-id=\"#Community_and_Social_Design\">Community and Social Design<\/a><\/li>\n\n\n\n<li><a href=\"#Customization_and_Personalization\" data-type=\"internal\" data-id=\"#Customization_and_Personalization\">Customization and Personalization<\/a><\/li>\n\n\n\n<li><a href=\"#Generative_AI_as_Design_Material\" data-type=\"internal\" data-id=\"#Generative_AI_as_Design_Material\">Generative AI as Design Material<\/a><\/li>\n\n\n\n<li><a href=\"#Visualization_and_Physicalization\" data-type=\"internal\" data-id=\"#Visualization_and_Physicalization\">Visualization and Physicalization<\/a><\/li>\n\n\n\n<li><a href=\"#Design_for_Specific_Contexts\" data-type=\"internal\" data-id=\"#Design_for_Specific_Contexts\">Design for Specific Contexts<\/a><\/li>\n\n\n\n<li><a href=\"#Reflection_and_Self-Awareness\" data-type=\"internal\" data-id=\"#Reflection_and_Self-Awareness\">Reflection and Self-Awareness<\/a><\/li>\n\n\n\n<li><a href=\"#Design_Methods_and_Processes\" data-type=\"internal\" data-id=\"#Design_Methods_and_Processes\">Design Methods and Processes<\/a><\/li>\n\n\n\n<li><a 
href=\"#Tangible_and_Material_Interfaces\" data-type=\"internal\" data-id=\"#Tangible_and_Material_Interfaces\">Tangible and Material Interfaces<\/a><\/li>\n\n\n\n<li><a href=\"#Critical_Materials\" data-type=\"internal\" data-id=\"#Critical_Materials\">Critical Materials &amp; Making<\/a><\/li>\n\n\n\n<li><a href=\"#Designing_for_Specific_User_Groups\" data-type=\"internal\" data-id=\"#Designing_for_Specific_User_Groups\">Designing for Specific User Groups<\/a><\/li>\n\n\n\n<li><a href=\"#Multi-Modal\" data-type=\"internal\" data-id=\"#Multi-Modal\">Multi-Modal Interaction Design with Generative AI<\/a><\/li>\n\n\n\n<li><a href=\"#Personal_Informatics\" data-type=\"internal\" data-id=\"#Personal_Informatics\">Personal Informatics<\/a><\/li>\n\n\n\n<li><a href=\"#Embodied_Interaction\" data-type=\"internal\" data-id=\"#Embodied_Interaction\">Embodied Interaction<\/a><\/li>\n\n\n\n<li><a href=\"#Ethics_and_Values\" data-type=\"internal\" data-id=\"#Ethics_and_Values\">Ethics and Values<\/a><\/li>\n\n\n\n<li><a href=\"#Health_Wellbeing_1\" data-type=\"internal\" data-id=\"#Health_Wellbeing_1\">Health and Wellbeing 1<\/a><\/li>\n\n\n\n<li><a href=\"#Health_and_Wellbeing_2\" data-type=\"internal\" data-id=\"#Health_and_Wellbeing_2\">Health and Wellbeing 2<\/a><\/li>\n\n\n\n<li><a href=\"#User_Experience_in_Specific_Contexts\" data-type=\"internal\" data-id=\"#User_Experience_in_Specific_Contexts\">User Experience in Specific Contexts<\/a><\/li>\n\n\n\n<li><a href=\"#Cultural_Heritage\" data-type=\"internal\" data-id=\"#Cultural_Heritage\">Cultural Heritage<\/a><\/li>\n\n\n\n<li><a href=\"#Playfulness_and_Engagement\">Playfulness and Engagement<\/a><\/li>\n\n\n\n<li><a href=\"#Work_and_Productivity\">Work and Productivity<\/a><\/li>\n<\/ul>\n\n\n\n<div id=\"DLtoc\">\n         <div id=\"DLheader\">\n            <h1>DIS &#8217;25: Proceedings of the 2025 ACM Designing Interactive Systems Conference<\/h1><a class=\"DLcitLink\" title=\"Go to the ACM Digital Library for 
additional information about this proceeding\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/proceedings\/10.1145\/3715336\"><img decoding=\"async\" class=\"DLlogo\" alt=\"Digital Library logo\" height=\"30\" src=\"https:\/\/dl.acm.org\/specs\/products\/acm\/releasedAssets\/images\/footer-logo1.png\">\n               Full Citation in the ACM Digital Library\n               <\/a><\/div>\n         <div id=\"DLcontent\">\n            <h2><a id = \"Extended_Reality\">SESSION: Extended Reality<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735424\">Sensing Nature<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Tianyuan Zhang<\/li>\n               <li class=\"nameList\">Wei Lin<\/li>\n               <li class=\"nameList\">Dingye Zhang<\/li>\n               <li class=\"nameList\">Xueni Pan<\/li>\n               <li class=\"nameList\">William Latham<\/li>\n               <li class=\"nameList\">Katie Grayson<\/li>\n               <li class=\"nameList\">Zillah Watson<\/li>\n               <li class=\"nameList Last\">Marco Fyfe Pietro Gillies<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>The rise of urbanisation has reduced connection with nature and physical interaction,\n                     both crucial for well-being and pro-environmental behavior. Sensing Nature is a multisensory\n                     virtual reality installation that aims to energize and nourish the spirit by reimagining\n                     the natural world as a playful, immersive experience. Users create a sensory journey\n                     by interacting with a haptic tree and exploring real-world fabrics and textures. 
Each\n                     touch triggers a transformation of the virtual tree, blending blooming virtual flowers,\n                     nature-inspired spatial sounds, and physical vibrations for a unique immersive experience.\n                     This project investigated the impacts of multisensory interaction in virtual reality on\n                     people&#8217;s feelings and attitudes towards nature, addressing the lack of direct touch-based\n                     haptic interactions in VR by incorporating active and passive haptic feedback. Qualitative\n                     studies showed that multisensory interactions in virtual reality induce healing effects\n                     and relaxation and shift attitudes towards nature, demonstrating Sensing Nature&#8217;s potential\n                     application for relaxation and pro-environmental attitudes.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735677\">RestfulRaycast: Exploring Ergonomic Rigging and Joint Amplification for Precise Hand\n                  Ray Selection in XR<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Hongyu Mao<\/li>\n               <li class=\"nameList\">Mar Gonzalez-Franco<\/li>\n               <li class=\"nameList\">Vrushank Phadnis<\/li>\n               <li class=\"nameList\">Eric J Gonzalez<\/li>\n               <li class=\"nameList Last\">Ishan Chatterjee<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Hand raycasting is widely used in extended reality (XR) for selection and interaction,\n                     but prolonged use can lead to arm fatigue (e.g., &#8220;gorilla 
arm&#8221;). Traditional techniques\n                     often require a large range of motion where the arm is extended and unsupported, exacerbating\n                     this issue. In this paper, we explore hand raycast techniques aimed at reducing arm\n                     fatigue, while minimizing impact to precision selection. In particular, we present\n                     Joint-Amplified Raycasting (JAR) \u2013 a technique which scales and combines the orientations\n                     of multiple joints in the arm to enable more ergonomic raycasting. Through a comparative\n                     evaluation with the commonly used industry standard Shoulder-Palm Raycast (SP) and two other ergonomic alternatives\u2014Offset Shoulder-Palm\n                     Raycast (OSP) and Wrist-Palm Raycast (WP)\u2014we demonstrate that JAR results in higher\n                     selection throughput and reduced fatigue. A follow-up study highlights the effects\n                     of different JAR joint gains on target selection and shows users prefer JAR over SP\n                     in a representative UI task.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735769\">GesPrompt: Leveraging Co-Speech Gestures to Augment LLM-Based Interaction in Virtual\n                  Reality<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Xiyun Hu<\/li>\n               <li class=\"nameList\">Dizhi Ma<\/li>\n               <li class=\"nameList\">Fengming He<\/li>\n               <li class=\"nameList\">Zhengzhe Zhu<\/li>\n               <li class=\"nameList\">Shao-Kang Hsia<\/li>\n               <li class=\"nameList\">Chenfei Zhu<\/li>\n               <li 
class=\"nameList\">Ziyi Liu<\/li>\n               <li class=\"nameList Last\">Karthik Ramani<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Large Language Model (LLM)-based copilots have shown great potential in Extended Reality\n                     (XR) applications. However, users face challenges when describing 3D environments\n                     to copilots due to the complexity of conveying spatial-temporal information through\n                     text or speech alone. To address this, we introduce GesPrompt, a multimodal XR interface\n                     that combines co-speech gestures with speech, allowing end-users to communicate more\n                     naturally and accurately with LLM-based copilots in XR environments. By incorporating\n                     gestures, GesPrompt extracts spatial-temporal references from co-speech gestures,\n                     reducing the need for precise textual prompts and minimizing cognitive load for end-users.\n                     Our contributions include (1) a workflow to integrate gesture and speech input in\n                     the XR environment, (2) a prototype VR system that implements the workflow, and (3)\n                     a user study demonstrating its effectiveness in improving user communication in VR\n                     environments.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            <h2><a id = \"More-than-Humans\">SESSION: More-than-Humans<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735426\">Critter Connect, wearable design for place-based &amp; multisensory species encounters.<\/a><\/h3>\n            <ul class=\"DLauthors\">\n       
        <li class=\"nameList\">Mathilde Gouin<\/li>\n               <li class=\"nameList\">Nuno Jardim Nunes<\/li>\n               <li class=\"nameList Last\">Valentina Nisi<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>This study presents <em>Critter Connect<\/em>, a wearable device fostering multispecies relationships in natural ecosystems. Grounded\n                     in posthuman theory and More-than-Human geography, the work responds to human-centred\n                     design limitations, which often overlook non-visual and non-linguistic modes of interaction.\n                     It also highlights the need for practical tools fostering direct, place-specific,\n                     and non-hierarchical sensory-rich engagements with other beings. This pictorial shows\n                     the device&#8217;s potential to enable spontaneous and embodied interactions between users\n                     and three species in a biodiversity-rich ecosystem through geolocation-based tactile\n                     and auditory feedback. We present a design process building on multispecies ethics\n                     and speculative methods to address ecological care, as well as a pilot study demonstrating\n                     Critter Connect&#8217;s capacity to amplify the wearer&#8217;s awareness of unseen multispecies\n                     presences and sense of connection to nature. 
This research contributes to HCI by offering\n                     a framework for designing ethically considerate, sensory-rich interactions with other\n                     beings, thus challenging human-centric engagement and promoting ecological cohabitation.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735431\">Knitting with unknown trees: assembling a more-than-human practice<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Doenja Oogjes<\/li>\n               <li class=\"nameList\">Ege K\u00f6kel<\/li>\n               <li class=\"nameList\">Netta Ofer<\/li>\n               <li class=\"nameList\">Hsiang-Lin Kuo<\/li>\n               <li class=\"nameList\">Jasmijn Vugts<\/li>\n               <li class=\"nameList\">Troy Nachtigall<\/li>\n               <li class=\"nameList Last\">Torin Hopkins<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>In this pictorial, we explore alternative ways of knowing urban trees through a more-than-human\n                     lens. Using a municipal tree dataset, we focus on \u201cunknown\u201d trees\u2014entries unclassified\n                     due to error, decay, or absence\u2014highlighting the limits of quantification and fixed\n                     knowledge systems. Urban trees, while critical for ecosystems, are often shaped by\n                     technological interventions (e.g., GIS, IoT sensors, AI diagnostics) that prioritize\n                     their utility over other expressions. We engage in knitting as a material inquiry\n                     to foreground nonhuman agencies and relational entanglements. 
Through reflective shifts\n                     and compromises, this project questions normative design practices, seeking to amplify\n                     nonhuman participation. We make two contributions. Firstly, we offer insights into\n                     fostering alternative, relational engagements with urban ecologies. Secondly, we reflect\n                     on our process of surfacing and working with agentic capacities, articulating guidance\n                     for other design researchers. Through this, we advocate for fragmented approaches\n                     that embrace complicity and complexity in more-than-human design.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735404\">Diffractive Interfaces: Facilitating Agential Cuts in Forest Data Across More-than-human\n                  Scales<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Elisa Giaccardi<\/li>\n               <li class=\"nameList\">Seowoo Nam<\/li>\n               <li class=\"nameList Last\">Iohanna Nicenboim<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>As cities worldwide adopt data-driven approaches to optimize urban forests, computational\n                     tools like agent-based models (ABMs) are increasingly popular to simulate forest growth\n                     and inform planting decisions. However, ABMs often focus on individual metrics, neglecting\n                     forests as interdependent ecosystems. 
Rooted in anthropocentric ideals, these models\n                     risk reducing forests to infrastructures for human benefit, undermining their long-term\n                     resilience. This pictorial challenges these limitations by exploring how interface\n                     design can transcend reductive, agent-centric representations to foster relational\n                     understandings of forest ecosystems as more-than-human bodies. Drawing on feminist\n                     theorist Karen Barad&#8217;s concepts of \u201cdiffraction\u201d and \u201cagential cuts,\u201d we craft a repertoire\n                     of diffractive interfaces that engage with forest simulation data, revealing how more-than-human\n                     bodies can be encountered across diverse temporal, spatial, and agential scales. Through\n                     this design exploration, we operationalize more-than-human perspectives in data practices,\n                     deepening our understanding of the performative dimensions of interfaces and advancing\n                     nuanced, practical approaches to more-than-human design.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735415\">Show Me Your More-Than-Human<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList Last\">Arne Berger<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>This series of photographic vignettes shows multiple instances of more-than-human,\n                     mapping out the diversity of approaches in this emergent field. 
More-than-human means\n                     embracing entangled, relational agencies and emphasizing pluralistic, situated, and\n                     non-anthropocentric ways of being in the world. Using a method of walking interviews,\n                     I invited researchers from diverse contexts to show me their more-than-human. This\n                     pictorial contributes a deliberately diverse inventory of encountering frontiers and\n                     boundaries; entities that are hidden, forbidden, spark curiosity; noticing kinship\n                     and hybrids: For becoming together in relational entanglements, to spark reflection\n                     and debate on what \u00bbmore-than-human design\u00ab may be.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id = \"Social_Robots_and_Agents\">SESSION: Social Robots and Agents<\/a><\/h2>\n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735429\">The Art of Mechamimicry: Designing Prototyping Tools for Human-Robot Interaction<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">James L Dwyer<\/li>\n               <li class=\"nameList\">Stine S Johansen<\/li>\n               <li class=\"nameList\">Markus Rittenbruch<\/li>\n               <li class=\"nameList\">Jared W Donovan<\/li>\n               <li class=\"nameList Last\">Rafael Gomez<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>This research investigates the application of tangible and embodied prototyping methods\n                     integrated with virtual simulation in 
Human-Robot Interaction (HRI). We present the\n                     development of the \u201ckinematic puppet,\u201d a reliable, reusable, adaptable, and accessible prototyping tool designed to facilitate stakeholder engagement in early-stage HRI\n                     research and development without requiring significant financial or time investments.\n                     The potential of this methodological approach is illustrated through a formative co-design\n                     workshop in Robotic Assisted Surgery (RAS), where the kinematic puppet, simple props\n                     and a low-fidelity anatomical model enabled stakeholders to externalise tacit knowledge\n                     through role-play scenarios. The case study suggests that combining physical and virtual\n                     approaches can support stakeholders in expressing concrete ideas for improving or\n                     changing the interaction, making abstract concepts tangible, with virtual simulation\n                     enabling rich data capture for further design development. This work contributes to\n                     the rapidly expanding toolbox of design approaches in HRI.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735765\">Impact of Affirmative and Negating Robot Gestures on Perceived Personality, Role,\n                  and Contribution of a Human Group Member<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Tuan Vu Pham<\/li>\n               <li class=\"nameList\">Thomas H. 
Weisswange<\/li>\n               <li class=\"nameList Last\">Marc Hassenzahl<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Robots can play a role in mediating human group interactions. This study examines\n                     how robot gestures affect the perception of a human group member\u2019s personality, role\n                     in the group, and contribution. In a vignette study (n=96), participants imagined\n                     being in a group discussion and watched a short video of another group member presenting\n                     an argument. In one condition (affirmative gesture), a robot nodded while the member\n                     spoke; in the other, it shook its head (negating gesture). A control condition featured\n                     no robot. The affirmative gesture enhanced perceptions of the speaker\u2019s personality\n                     and role in the group, though their contribution was not affected. The negating gesture\n                     showed no adverse effects. Additionally, participants perceived the robot as a group\n                     member when it nodded but as an onlooker when it shook its head. 
This suggests that\n                     positive robot gestures can improve group dynamics by fostering favorable interpersonal\n                     perceptions.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id =\"Speculative_Design_and_Futures\">SESSION: Speculative Design and Futures<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735405\">The Image of the Metaverse: A Plurality of Narratives for Immersive Realities<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Jihae Han<\/li>\n               <li class=\"nameList Last\">Jeffrey V Nickerson<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>This pictorial explores the challenges and opportunities of creating meaningful virtual\n                     spaces in the metaverse. Drawing inspiration from Kevin Lynch&#8217;s principles of urban\n                     imageability, the authors present a series of narrative explorations and associated\n                     graphics that reimagine how space, time, and social interaction might function in\n                     virtual environments. The work identifies key differences between physical and virtual\n                     architectures, including perceptual fungibility, non-linear spatial relationships,\n                     and collective emergence. 
Through detailed narratives organized around themes of arrival,\n                     boundary, navigation, connection, and memory, the pictorial proposes new organizing\n                     principles for metaverse design that embrace discontinuity, fluid boundaries, and\n                     social physics rather than attempting to replicate physical space. The work contributes\n                     theoretical frameworks and methodological insights for developing more imageable,\n                     engaging, and coherent virtual worlds.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735434\">LO: A Speculative Domestic Technology That Lives and Dies Along with Its User<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">JiYeon Lee<\/li>\n               <li class=\"nameList\">Chang-Min Kim<\/li>\n               <li class=\"nameList\">Jisu Park<\/li>\n               <li class=\"nameList\">Hyungjun Cho<\/li>\n               <li class=\"nameList Last\">Tek-Jin Nam<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>To expand the discourse on non-utilitarian values in HCI and Design, this pictorial\n                     explores the concept of Life-Synchronized Products (LSPs), domestic technologies that\n                     align lifecycles with users. Through a co-design workshop, we developed property dimensions\n                     for products whose lifetime is associated with users. 
We defined LSPs from properties\n                     in these dimensions and exemplify the concept through \u2018LO (Life-synchronized Oven),\u2019 an oven\n                     that lives along with its user and ceases to exist upon death. LO features a visual\n                     interface, Thread of Life, reflecting lifetime and meaningful events, demonstrating\n                     physical and intelligent growth, expressing thoughts and emotions, and leaving no\n                     trace after termination. Expert interviews with our research artifacts including a\n                     semi-functional prototype, service website, and design fiction video revealed that\n                     LO can transcend traditional utilitarian roles to become a \u201clife companion\u201d that fosters\n                     existential reflection. We discuss the technical, business, and socio-ethical challenges\n                     of LSPs and implications for HCI and design research.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735444\">A tidalectic reading of landscapes: Multispecies peripatetic ethnography as a method\n                  for knowing landscapes.<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Katerina Inglezaki<\/li>\n               <li class=\"nameList\">Mariana Pestana<\/li>\n               <li class=\"nameList Last\">Nuno Jardim Nunes<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>This study explores multispecies interactions and human-nonhuman synergies in intertidal\n                     zones through a novel autoethnographic approach. 
Multispecies ethnography and posthumanist\n                     thinking challenge human-centered perspectives, highlighting the need to embrace diverse\n                     temporalities and ways of knowing in ecological research. However, current methods\n                     often fail to adequately capture these complex interrelations and the lived experiences\n                     within such environments. Shifting rhythmically between land and water, we use the\n                     intertidal contact zone to unveil the delicate synergy between humans and animals\n                     that populate the salt marsh of the Tagus delta. Our findings underscore the potential\n                     of field-based, participatory methods to model multispecies interactions and experimental\n                     drawing methods with a spatiotemporal structure that allows thinking beyond the traditional\n                     representational techniques of landscape architecture &#8211; what we call tidalectic portraits.\n                     While our work does not offer immediate solutions to ecological crises, it emphasizes\n                     the importance of slowing down and engaging with landscape rhythms to cultivate embodied,\n                     situated knowledge.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735781\">Designing Multisensory Biophilic Futures: Exploring the Potential of Interaction Design\n                  to Deepen Human Connections With Nature in Indoor Environments<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Shruti Rao<\/li>\n               <li class=\"nameList\">Judith 
Good<\/li>\n               <li class=\"nameList Last\">Hamed Alavi<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Advances in interaction design, architecture, and artificial intelligence offer new\n                     possibilities for built environments. Yet, most systems focus on improving physical\n                     parameters such as indoor air quality. While these enhance physical comfort, they\n                     often overlook an innate aspect of human experience\u2014our connection with nature\u2014fundamental\n                     to physical and mental health. In contrast, architecture offers a rich legacy of biophilic\n                     design that creates sensory-rich spaces evoking a connection to nature. What insights\n                     can biophilic architecture offer to guide interactive experiences in future buildings?\n                     Drawing on 13 expert interviews, we expose the gap between current biophilic practices\n                     in smart buildings and the multidimensional potential of nature-inspired design. 
We\n                     present eight themes reflecting expert imaginaries of biophilic futures and five design\n                     opportunities illustrating how emerging technologies can position biophilic interaction\n                     as multi-sensory, interpretive, and aligned with more-than-human, justice-oriented\n                     futures.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id =\"Accessibility_and_Inclusive_Design\">SESSION: Accessibility and Inclusive Design<\/a><\/h2>\n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735782\">Understanding the Accessibility of Single-User Virtual Reality Environments for Adults\n                  with Intellectual and Developmental Disabilities<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Piriyankan Kirupaharan<\/li>\n               <li class=\"nameList\">Tina-Marie Ranalli<\/li>\n               <li class=\"nameList Last\">Krishna Venkatasubramanian<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>In this paper, we aim to understand accessibility issues for people with intellectual\n                     and developmental disabilities (I\/DD) with single-user VR applications. To this end,\n                     we recruited eight participants with I\/DD for this study. We asked each participant\n                     to use a single-user VR application (on Meta Quest 2) and then conducted semi-structured\n                     interviews about their experiences. 
A subsequent thematic analysis of our interviews\n                     resulted in identifying several accessibility problems in using VR for people with\n                     I\/DD. Overall, we found that participants had difficulty: perceiving (including comprehending)\n                     the various elements of the virtual environment and using physical controllers to\n                     engage with (i.e., act within) the virtual environment. The participants then suggested\n                     potential improvements to make the virtual environments more accessible. Based on\n                     these findings, we call for further research in four broad areas to foster an accessible\n                     VR experience for people with I\/DD.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735685\">Describe Now: User-Driven Audio Description for Blind and Low Vision Individuals<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Maryam Cheema<\/li>\n               <li class=\"nameList\">Hasti Seifi<\/li>\n               <li class=\"nameList Last\">Pooyan Fazli<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Audio descriptions (AD) make videos accessible for blind and low vision (BLV) users\n                     by describing visual elements that cannot be understood from the main audio track.\n                     AD created by professionals or novice describers is time-consuming and offers little\n                     customization or control to BLV viewers on description length and 
content and when\n                     they receive it. To address this gap, we explore user-driven AI-generated descriptions,\n                     enabling BLV viewers to control both the timing and level of detail of the descriptions\n                     they receive. In a study, 20 BLV participants activated audio descriptions for seven\n                     different video genres with two levels of detail: concise and detailed. Our findings\n                     reveal differences in the preferred frequency and level of detail of ADs for different\n                     videos, participants\u2019 sense of control with this style of AD delivery, and its limitations.\n                     We discuss the implications of these findings for the development of future AD tools\n                     for BLV users.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id = \"Generative_AI_Tools_for_Learning\">SESSION: Generative AI Tools for Learning<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735406\">The Second Organ Era: Exploring Human-AI Relationship Through Interactive Narrative<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Yanru Qian<\/li>\n               <li class=\"nameList\">Ching Wen Lee<\/li>\n               <li class=\"nameList Last\">Adorey Shen<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>AI&#8217;s automated information processing capabilities have started replacing human cognitive\n                     thinking in ways that are increasingly imperceptible. 
We aim to prompt a discussion\n                     on the potential loss of human autonomy and the perceptual influences of AI interventions\n                     in interpersonal communication by constructing a fictional and interactive narrative.\n                     The second organ era envisions a future where a series of AI-powered wearables (respectively,\n                     second eye, second ear, and second mouth) become an extension of the human sensory\n                     system, mediating how we acquire, interpret, and transmit information. By employing\n                     critical making as a reflective design practice, we materialize inquiries into how\n                     human-AI relationships should evolve and call for critical evaluation of our growing\n                     reliance on AI.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735673\">Prompt Machine: A Tangible Generative AI Tool for Supporting Children&#8217;s Learning and\n                  Literacy<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Martin Lindrup<\/li>\n               <li class=\"nameList\">Rune M\u00f8berg Jacobsen<\/li>\n               <li class=\"nameList\">Joel Wester<\/li>\n               <li class=\"nameList\">Niels van Berkel<\/li>\n               <li class=\"nameList\">Dimitrios Raptis<\/li>\n               <li class=\"nameList Last\">Peter Axel Nielsen<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Generative AI technologies are moving into school settings. However, there is confusion\n                     about how, when, and why these technologies should be used. 
Our aim is to provide\n                     insights on how AI technology can be meaningfully integrated into schools, with a\n                     specific focus on secondary school education. Informed by ten teachers, we developed\n                     Prompt Machine, a tangible learning tool that serves three central purposes: 1) scaffold\n                     curriculum learning, 2) support development of AI literacy, and 3) act as a focal\n                     point among pupils and teachers for discussing possibilities and limitations of AI.\n                     Based on a study with 33 pupils and their teachers, we present findings on tangible\n                     and collaborative AI interactions, facilitation of AI, and integration of AI into\n                     curricula. Additionally, we reflect on challenges and opportunities for AI in education\n                     from the perspective of teachers and learners and discuss future steps for tangible\n                     AI.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735789\">PersonaFlow: Designing LLM-Simulated Expert Perspectives for Enhanced Research Ideation<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Yiren Liu<\/li>\n               <li class=\"nameList\">Pranav Sharma<\/li>\n               <li class=\"nameList\">Mehul Oswal<\/li>\n               <li class=\"nameList\">Haijun Xia<\/li>\n               <li class=\"nameList Last\">Yun Huang<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Generating interdisciplinary research ideas requires diverse domain expertise, but\n                     access to timely feedback is often limited 
by the availability of experts. In this\n                     paper, we introduce <em>PersonaFlow<\/em>, a novel system designed to provide multiple perspectives by using LLMs to simulate\n                     domain-specific experts. Our user studies showed that the new design 1) increased\n                     the perceived relevance and creativity of ideated research directions, and 2) promoted\n                     users\u2019 critical thinking activities (e.g., <em>interpretation<\/em>, <em>analysis<\/em>, <em>evaluation<\/em>, <em>inference<\/em>, and <em>self-regulation<\/em>), without increasing their perceived cognitive load. Moreover, users\u2019 ability to\n                     customize expert profiles significantly improved their sense of agency, which can\n                     potentially mitigate their over-reliance on AI. This work contributes to the design\n                     of intelligent systems that augment creativity and collaboration, and provides design\n                     implications of using customizable AI-simulated personas in domains within and beyond\n                     research ideation.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735762\">ClassComet: Exploring and Designing AI-generated Danmaku in Educational Videos to\n                  Enhance Online Learning<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Zipeng Ji<\/li>\n               <li class=\"nameList\">Pengcheng An<\/li>\n               <li class=\"nameList Last\">Jian Zhao<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Danmaku, users\u2019 live comments 
synchronized with, and overlaying on videos, has recently\n                     shown potential in promoting online video-based learning. However, user-generated\n                     danmaku can be scarce\u2014especially in newer or less viewed videos\u2014and its quality is\n                     unpredictable, limiting its educational impact. This paper explores how large multimodal\n                     models (LMM) can be leveraged to automatically generate effective, high-quality danmaku.\n                     We first conducted a formative study to identify the desirable characteristics of\n                     content- and emotion-related danmaku in educational videos. Based on the obtained\n                     insights, we developed ClassComet, an educational video platform with novel LMM-driven\n                     techniques for generating relevant types of danmaku to enhance video-based learning.\n                     Through user studies, we examined the quality of generated danmaku and their influence\n                     on learning experiences. 
The results indicate that our generated danmaku is comparable\n                     to human-created ones, and videos with both content- and emotion-related danmaku showed\n                     significant improvement in viewers\u2019 engagement and learning outcome.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735786\">ELLMA-T: an Embodied LLM-agent for Supporting English Language Learning in Social\n                  VR<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Mengxu Pan<\/li>\n               <li class=\"nameList\">Alexandra Kitson<\/li>\n               <li class=\"nameList\">Hongyu Wan<\/li>\n               <li class=\"nameList Last\">Mirjana Prpa<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Many people struggle with learning a new language when moving to a new country, with\n                     traditional tools falling short in providing contextualized learning tailored to each\n                     learner\u2019s needs. The recent development of large language models (LLMs) and embodied\n                     conversational agents (ECAs) in social virtual reality (VR) provides new opportunities\n                     to practice language learning in a contextualized and naturalistic way that takes\n                     into account the learner\u2019s language level and needs. To explore this opportunity,\n                     we developed ELLMA-T, a design probe that integrates an LLM (GPT-4) with an ECA for\n                     English language learning in social VR (VRChat), informed by the situated learning\n                     framework. 
We conducted a feasibility study to explore the potential and challenges\n                     of LLM-based ECAs for language learning in social VR. Drawing on qualitative interviews\n                     (N=12), we reveal the potential of ELLMA-T to generate realistic, believable, and\n                     context-specific role plays for agent-learner interaction in VR, and LLM\u2019s capability\n                     to provide initial language assessment and continuous feedback to learners. We provide\n                     four design implications for the future development of LLM-based language agents in\n                     social VR.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id=\"Privacy_and_Security\">SESSION: Privacy and Security<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735435\">Teenagers and the Data Economy: Understanding Their Dreams, Desires and Anxieties\n                  with Metaphor Workbooks<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Samuel Barnett<\/li>\n               <li class=\"nameList\">William Odom<\/li>\n               <li class=\"nameList Last\">Samien Shamsher<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Teenagers are in a unique position, having known no other reality than the current\n                     exploitative model of the data economy, and are particularly at risk of harm from\n                     it. Using a classroom intervention with 31 Grade 9 students, we deployed co-created\n                     Metaphor Workbooks as a tool to foster critical and reflexive thinking about their\n                     phones and data. 
Our research advances the HCI community&#8217;s understanding of teenagers\u2019\n                     entanglements with the data economy, by highlighting how they experience it through\n                     their critical, reflective, and creative responses. This alludes to ways in which\n                     future initiatives could better support teenagers in developing a critical relationship\n                     with data. We identify key gaps in their understanding of the data economy and emphasize\n                     the need for critical data literacy interventions to address their limited understanding,\n                     complex emotional relationships with their phones, and the pervasive influence of\n                     technology addiction narratives.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735411\">Wall, An Eccentric Design Probe: Exploring and Exposing the Sense-and-Extract Paradigm<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">James Pierce<\/li>\n               <li class=\"nameList\">Gabrielle Queen<\/li>\n               <li class=\"nameList Last\">Kristin Tapang<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>We describe our process of conceptualizing, designing, prototyping, and using Wall,\n                     an eccentric smart device intended to encourage reflection on the design qualities\n                     of surveillant sensing systems. 
As a result of our conceptual design process, we first\n                     summarize four design qualities that define a sense-and-extract interaction paradigm.\n                     We then document our process of developing Wall. Finally, we reflect on our experiences\n                     using and living with Wall, and the experiential and theoretical insights we gained\n                     through our self-use studies.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735427\">Arca: Documenting Novel Design Patterns for Improving Interpersonal Privacy with Smart\n                  Cameras<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">James Pierce<\/li>\n               <li class=\"nameList\">Claire Florence Weizenegger<\/li>\n               <li class=\"nameList\">Robyn Anderson<\/li>\n               <li class=\"nameList\">Hope Terpilowski<\/li>\n               <li class=\"nameList\">Wyatt Olson<\/li>\n               <li class=\"nameList\">Lian Bensadon<\/li>\n               <li class=\"nameList\">Faith Ong<\/li>\n               <li class=\"nameList Last\">Cobi Stancik<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Companies and smart product designers prioritize the needs of primary users. However,\n                     smart devices with spatial sensors, like cameras and microphones, impact the experiences\n                     of people nearby. We refer to these people as adjacent users. 
We present a research\n                     through design project that highlights an overlooked need to design for adjacent users.\n                     This research outlines a range of actionable pathways forward in the form of design\n                     patterns, principles, and novel problem-framings. We also reflect upon barriers and\n                     inherent limits to user-centered design approaches. Our vehicle for generating and\n                     communicating these insights is a design concept and prototype called Arca. Presented\n                     as a fictional product, we brand Arca as \u201cmore inclusive, privacy-enhancing smart\n                     camera for you and your extended household.\u201d<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id =\"Sound_and_Haptics\">SESSION: Sound and Haptics<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735421\">\u201cLet&#8217;s Jump into More Creative Avatars and Take this Brainstorm to the Flying Platform:\u201d\n                  Playful Prototypes of VR Meeting Support Tools<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Anya Osborne<\/li>\n               <li class=\"nameList\">Joshua McVeigh-Schultz<\/li>\n               <li class=\"nameList\">Alexandra Leeds<\/li>\n               <li class=\"nameList\">George Butler<\/li>\n               <li class=\"nameList\">Samir Ghosh<\/li>\n               <li class=\"nameList Last\">Katherine Isbister<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Most 
videoconferencing technologies presently struggle to keep people engaged during\n                     team meetings. Recent research points to the potential of social virtual and extended\n                     reality (VR\/XR) technologies to transform remote meetings, offering innovative approaches\n                     through the design of novel meeting tools. This pictorial presents the design of five\n                     such prototypes of VR meeting support tools: conversation visualization, embodied\n                     affinity signaling, tools for avatar and space modulation, and time management. Deployed\n                     in a custom-built Mozilla Hubs environment, this toolkit was tested with five expert\n                     user teams in a two-fold study: researcher-moderated VR workshops (<em>N<\/em>=28) and unmoderated VR meetings (<em>N<\/em>=40). In this pictorial, the focus is on the design of the prototypes, with highlights\n                     of select results from participants\u2019 feedback and discussion of the perceived merit\n                     of the prototypes\u2019 design in supporting meeting interactions.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735408\">Sound-O-Matic: A tool for designing and implementing 3D soundscapes<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Jonas Oxenb\u00f8ll Petersen<\/li>\n               <li class=\"nameList Last\">Kim Halskov<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>The last two decades have witnessed a growing significance of soundscape design as\n                     a core topic in Interaction 
Design. Sound-O-Matic is an innovative tool that facilitates\n                     the design of real-time three-dimensional soundscapes. Diverging from conventional\n                     track-based audio tools, Sound-O-Matic is constructed atop Unity, a robust 3D game\n                     engine, thereby empowering designers to intricately address the temporal and spatial\n                     dynamics inherent in soundscape design. This study showcases three enduring and one\n                     transitory soundscape instances, seamlessly integrated into diverse settings: 1) a\n                     greenhouse, 2) a bunker, 3) a playground, and 4) a passenger train. These case studies\n                     illustrate how Sound-O-Matic adeptly manages a spectrum of design considerations encompassing\n                     the spatial configuration of speakers, the temporal dynamics inherent in the soundscape,\n                     interaction, and user experience. The discussion compares the four cases, highlights\n                     the diversity of the designs, and concludes with a brief discussion of potential further\n                     development of the tool.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735775\">SONARIOS: A Design Futuring-Driven Exploration of Acoustophoresis<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Ceylan Be\u015fevli<\/li>\n               <li class=\"nameList\">Lei Gao<\/li>\n               <li class=\"nameList\">Narsimlu Kemsaram<\/li>\n               <li class=\"nameList\">Giada Brianza<\/li>\n               <li class=\"nameList\">Orestis Georgiou<\/li>\n               <li class=\"nameList\">Sriram 
Subramanian<\/li>\n               <li class=\"nameList Last\">Marianna Obrist<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Sound waves shape not only our ability to hear but also offer new ways to interact\n                     with and experience our environment. Acoustophoresis, a method of manipulating objects\n                     using the mechanical energy of sound, enables multimodal displays incorporating touch,\n                     taste, vision, and more. While current research has focused on technical advancements,\n                     we explore acoustophoresis and its applications through a design-driven lens. We conducted\n                     a design futuring workshop, speculating on possible applications, and identified six\n                     key themes. Based on these insights, we developed two speculative scenarios, SONARIOS,\n                     that illustrate future experiences shaped by acoustophoresis. We abstracted the insights\n                     into three strong design concepts that bridge speculative exploration with practical\n                     design. 
We emphasize the importance of balancing desirability, feasibility, and responsibility\n                     in acoustophoresis development, advocating for a design approach that integrates technical\n                     innovation with user-centred considerations.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735797\">It Sounds Squishy: Understanding Cross-Modal Correspondences of Deformable Shapes\n                  and Sounds<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Maisie Palmer<\/li>\n               <li class=\"nameList\">Thomas J. Mitchell<\/li>\n               <li class=\"nameList\">Jason Alexander<\/li>\n               <li class=\"nameList Last\">Cameron Steer<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Computing interfaces are becoming increasingly sophisticated, with systems that engage\n                     multiple sensory channels simultaneously. Deformable and shape-changing interfaces\n                     offer rich tactile experiences, but there is limited understanding of how they can\n                     be combined with other modes of sensory feedback. We systematically explored the audio,\n                     visual and tactile cross-modal correspondences of deformable shapes with a particular\n                     focus on auditory feedback. 50 participants were asked to associate deformable tactile\n                     stimuli, varying in stiffness and shape, with the sound qualities pitch, brightness,\n                     fade-in time and fade-out time, under visuo-tactile and tactile-only conditions. 
Our\n                     findings provide the first insights on how (1) shape, both its form and visibility,\n                     plays a significant role in associations for pitch and brightness; (2) stiffness plays\n                     a dominant role in associations over a sound\u2019s fade-in and fade-out times. These findings\n                     are distilled into the first design guidelines for integrating auditory feedback into\n                     physical interfaces.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id = \"Ambient_Technologies\">SESSION: Ambient Technologies<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735692\">Ambient Awareness: Experiencing Always-On Displays in the Life of PV Households<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Arjun Rajendran Menon<\/li>\n               <li class=\"nameList Last\">Jorge Luis Zapico<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>The adoption of photovoltaic (PV) panels, electric vehicles (EVs), and dynamic electricity\n                     pricing is transforming households into active &#8220;prosumers&#8221; who generate, consume,\n                     and sell electricity. This shift, driven by rising costs and environmental concerns,\n                     requires new technologies to help households manage their production and consumption.\n                     Electricity\u2019s invisibility adds complexity, necessitating interfaces that make energy\n                     use and generation comprehensible. 
This paper presents the Always-On In-Home Display\n                     (AOIHD), a technology probe designed for prosumer households to navigate the dynamics\n                     of this production and consumption &#8211; balancing periods of solar abundance and grid\n                     reliance &#8211; by making energy data persistently and collectively accessible within the\n                     household. Adopting a practice theory lens, we explore how the AOIHD was experienced\n                     in daily life over a four-year autobiographical study and through deployments in other\n                     Swedish households. Our findings highlight four experiential qualities\u2014Learning, Triggering,\n                     Including, and Troubling\u2014that illustrate how the display supports the domestication\n                     of energy feedback technologies in prosumer contexts. We argue that fostering integration\n                     into household practices is key to sustaining meaningful interaction with smart energy\n                     technologies.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735763\">\u201cHello, This Is a Voice Assistant Calling\u201d: When a Human Voice Calls Claiming to Be\n                  a Machine on an Ordinary Day<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Jeesun Oh<\/li>\n               <li class=\"nameList\">Yunjae Choi<\/li>\n               <li class=\"nameList Last\">Sangsu Lee<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>With the advent of neural networks, it has become possible to generate synthetic voices\n     
                that are nearly indistinguishable from real human speech (i.e., human-sounding voice).\n                     In contrast, earlier voice assistants used voices that were instantly recognizable\n                     as machine-generated, owing to their standardized, consistent, and highly intelligible\n                     qualities (i.e., artificial-sounding voice). Although people tend to prefer human-like\n                     voices, adopting human-sounding voices in voice assistants raises ethical concerns\n                     related to confusion or unintentional deception, particularly in voice-only contexts,\n                     even when their identity as systems is explicitly disclosed. To explore the voice\n                     design direction for future voice assistants, we examined how participants perceived\n                     and interacted when they were unexpectedly confronted with either an artificial-sounding\n                     or a human-sounding voice, both of which clearly identified themselves as voice assistants\n                     during an everyday phone call. Our findings reveal participants\u2019 experiences and conversational\n                     behaviors in each voice condition. 
Furthermore, we discuss how the voices of voice\n                     assistants should be designed and propose design implications that emphasize transparency\n                     and responsiveness in voice design.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id = \"Critical_Perspectives\">SESSION: Critical Perspectives<\/a><\/h2>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id =\"Designing_Generative_AI_Tools\">SESSION: Designing Generative AI Tools for Creative Work<\/a><\/h2>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id = \"Studying_Generative_AI_Use\">SESSION: Studying Generative AI Use in Creative Work<\/a><\/h2>\n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735780\">The GenUI Study: Exploring the Design of Generative UI Tools to Support UX Practitioners\n                  and Beyond<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Xiang &#8216;Anthony&#8217; Chen<\/li>\n               <li class=\"nameList\">Tiffany Knearem<\/li>\n               <li class=\"nameList Last\">Yang 
Li<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>AI can now generate high-fidelity UI mock-up screens from a high-level textual description,\n                     promising to support UX practitioners\u2019 work. However, it remains unclear how UX practitioners\n                     would adopt such Generative UI (GenUI) models in a way that is integral and beneficial\n                     to their work. To answer this question, we conducted a formative study with 37 UX-related\n                     professionals that consisted of four roles: UX designers, UX researchers, software\n                     engineers, and product managers. Using a state-of-the-art GenUI tool, each participant\n                     went through a week-long, individual mini-project exercise with role-specific tasks,\n                     keeping a daily journal of their usage and experiences with GenUI, followed by a semi-structured\n                     interview. 
We report findings on participants\u2019 workflow using the GenUI tool, how\n                     GenUI can support all roles collectively as well as each specific role, and the existing gaps between GenUI and\n                     users\u2019 needs and expectations, which lead to design implications that inform future\n                     work on GenUI development.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735691\">Good Accessibility, Handcuffed Creativity: AI-Generated UIs Between Accessibility\n                  Guidelines and Practitioners\u2019 Expectations<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Alexandra-Elena Guri\u021b\u0103<\/li>\n               <li class=\"nameList Last\">Radu-Daniel Vatavu<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>The emergence of AI-powered UI generation tools presents both opportunities and challenges\n                     for accessible design, but their ability to produce truly accessible outcomes remains\n                     underexplored. In this work, we examine the effects of different prompt strategies\n                     through an evaluation of ninety interfaces generated by two AI tools across three\n                     application domains. Our findings reveal that, while these tools consistently achieve\n                     basic accessibility compliance, they rely on homogenized design patterns, which can\n                     limit their effectiveness in addressing specialized user needs. 
Through interviews\n                     with eight professional designers, we examine how this standardization impacts creativity\n                     and challenges the design of inclusive UIs. Our results contribute to the growing\n                     discourse on AI-powered design with (i) empirical insights into the capabilities of\n                     AI tools for generating accessible UIs, (ii) identification of barriers in this process,\n                     and (iii) guidelines for integrating AI into design workflows in ways that support\n                     both designers\u2019 creativity and design flexibility.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735686\">What About My Design Context?: Exploring the Use of Generative AI to Support Customization\n                  of Translational Research Artifacts<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Donghoon Shin<\/li>\n               <li class=\"nameList\">Tze-Yu Chen<\/li>\n               <li class=\"nameList\">Gary Hsieh<\/li>\n               <li class=\"nameList Last\">Lucy Lu Wang<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Despite the wealth of knowledge in research papers, practitioners struggle to apply\n                     research results to their work due to significant research-practice gaps. This study\n                     addresses the rigor-relevance paradox, where academic rigor can undermine the practical\n                     relevance of research for designers. 
Specifically, we explore the potential of large\n                     language models (LLMs) to customize translational research artifacts (<em>i.e.<\/em>, design cards) and improve relevance to specific designers\u2019 needs. In our preliminary\n                     study (<em>N<\/em> = 15), designers defined relevance as alignment between the content of the translational\n                     artifact and their design context\u2014including target users, modalities\/domains, and\n                     design stages. Based on these findings, we implemented an LLM-powered pipeline that\n                     allows designers to customize research papers into design cards tailored to their\n                     contexts. Our evaluation (<em>N<\/em> = 20) demonstrated that designers perceived customized artifacts as more relevant,\n                     actionable, valid, generative, and inspiring than those without customization\u2014even\n                     for less topically related papers\u2014indicating LLM-powered customization can be used\n                     to support research translation.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735785\">Exploring the Potential of Metacognitive Support Agents for Human-AI Co-Creation<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Frederic Gmeiner<\/li>\n               <li class=\"nameList\">Kaitao Luo<\/li>\n               <li class=\"nameList\">Ye Wang<\/li>\n               <li class=\"nameList\">Kenneth Holstein<\/li>\n               <li class=\"nameList Last\">Nikolas Martelaro<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n             
     <p>Despite the potential of generative AI (GenAI) design tools to enhance design processes,\n                     professionals often struggle to integrate AI into their workflows. Fundamental cognitive\n                     challenges include the need to specify all design criteria as distinct parameters\n                     upfront (intent formulation) and designers\u2019 reduced cognitive involvement in the design\n                     process due to cognitive offloading, which can lead to insufficient problem exploration,\n                     underspecification, and limited ability to evaluate outcomes. Motivated by these challenges,\n                     we envision novel <em>metacognitive support agents<\/em> that assist designers in working more reflectively with GenAI. To explore this vision,\n                     we conducted exploratory prototyping through a Wizard of Oz elicitation study with\n                     20 mechanical designers probing multiple metacognitive support strategies. We found\n                     that agent-supported users created more feasible designs than non-supported users,\n                     with differing impacts between support strategies. 
Based on these findings, we discuss\n                     opportunities and tradeoffs of metacognitive support agents and considerations for\n                     future AI-based design tools.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id= \"Collaborative_and_Participatory_Design\">SESSION: Collaborative and Participatory Design<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735436\">Designing Exchangeopoly: A Boardgame to Explore Value Exchange within Communities<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Simran Chopra<\/li>\n               <li class=\"nameList\">Harvey Everson<\/li>\n               <li class=\"nameList Last\">John Vines<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>In this pictorial, we discuss the design of Exchangeopoly, a boardgame developed to\n                     investigate exchanges between people in communities when they help each other out.\n                     Such exchanges are often acts of kindness for forms of volunteering that are not remunerated\n                     financially and are built on social capital. The boardgame scaffolded explorations\n                     of scenarios with participants where informal altruistic interactions in their communities\n                     are tokenised, rewarded and incentivised. We focus on the designed-in features and\n                     considerations that went into the visual and material production of the game and its\n                     gameplay mechanics. 
We discuss how Exchangeopoly was a valuable method that surfaced\n                     existing and speculated practices of exchange, and supported participants to explore\n                     the opportunities and problems of representing and rewarding such interactions. We\n                     contribute insights about the usefulness of Exchangeopoly as a tool to explore scenarios\n                     and surface tensions about tokenisation in community value exchange.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id=\"Narrative_and_Storytelling\">SESSION: Narrative and Storytelling<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735766\">Narrative Motion Blocks: Combining Direct Manipulation and Natural Language Interactions\n                  for Animation Creation<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Samuelle Bourgault<\/li>\n               <li class=\"nameList\">Li-Yi Wei<\/li>\n               <li class=\"nameList\">Jennifer Jacobs<\/li>\n               <li class=\"nameList Last\">Rubaiat Habib Kazi<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Authoring compelling animations often requires artists to come up with creative high-level\n                     ideas and translate them into precise low-level spatial and temporal properties like\n                     position, orientation, scale, and frame timing. 
Traditional animation tools offer\n                     direct manipulation strategies to control these properties but lack support for implementing\n                     higher-level ideas. Alternatively, AI-based tools allow animation production using\n                     natural language prompts but lack the fine-grained control over properties required\n                     for professional workflows. To bridge this gap, we propose AniMate, a hand-drawn animation system that integrates direct manipulation and natural language\n                     interaction. Central to AniMate are narrative motion blocks, clip-like components located on a timeline that let animators specify animated behaviors\n                     with a combination of textual and manual input. Through an expert evaluation and the\n                     creation of short demonstrative animations, we show how focusing on intermediate-level\n                     actions provides a common representation for animators to work across both interaction\n                     modalities.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735761\">From Sci-Fi Imagination to Everyday Interaction: A Narrative Framework for the Self-Awakening\n                  Journey of a Smart Lamp<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Bowen Kong<\/li>\n               <li class=\"nameList Last\">Rung-Huei Liang<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>In recent years, digital technologies have become increasingly autonomous, offering\n                     &#8220;mind-like&#8221; experiences of intelligent 
objects across various things, including smart\n                     home devices, social robots, and voice assistants. Drawing inspiration from the classic\n                     &#8220;mind awakening&#8221; narratives of intelligent things in science fiction, this study employs\n                     design fiction to integrate such storylines into everyday contexts. We present EvoLumen,\n                     a conceptual lamp designed to explore the emergent self-awareness of a thing. The\n                     lamp was deployed in the homes of five participants for one week, generating daily\n                     first-person narratives that sequentially covered environmental perception, emotional\n                     emulation, dream states, self-reflection, and farewell. Analysis of participant feedback\n                     and observations revealed the influence of detection accuracy, emotional triggers,\n                     and science fiction elements on perceptions of the lamp\u2019s self-awareness. 
Additionally,\n                     we emphasize the pivotal role of time in shaping the agency of things and propose\n                     a &#8220;narrative framework&#8221; to guide the development of more immersive and experiential\n                     digital companions.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id= \"Sustainability_and_Environmental_Awareness\">SESSION: Sustainability and Environmental Awareness<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735439\">(Un)-blanketing Indigenous Climate Change Indicators for Understanding Local Climate\n                  Change<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Lizette Reitsma<\/li>\n               <li class=\"nameList\">Diana Azyln William<\/li>\n               <li class=\"nameList\">Meshack Nkosinathi Dludlu<\/li>\n               <li class=\"nameList\">Gugu Fortunate Sibandze<\/li>\n               <li class=\"nameList\">Molibeli Benedict Taele<\/li>\n               <li class=\"nameList\">Charles Tseole<\/li>\n               <li class=\"nameList Last\">Tariq Zaman<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>We (local researchers, Indigenous communities, local stakeholders, and a researcher\n                     bridging the places) have worked together over the last two years, in the project\n                     <em>Indigenous Climate Observatories, local knowledge for local action<\/em> to define such observatories, what they become when practiced and what this could\n                     mean. 
They focus on understanding climate change from local perspectives. Each of\n                     the explorations started with a blanket, used as a meeting space to initiate conversation\n                     &#8211; a design seed. Each also resulted in a blanket, which can be seen as a manifestation\n                     of the respective Indigenous Climate Observatory &#8211; a boundary object. This\n                     pictorial will present the blankets, how they were made and the role they had in the\n                     process. The blankets were of great importance to \u2018work knowledges together\u2019. By\n                     doing this weaving of knowledges, we accept diverse forms of knowledge systems as\n                     equitable and respectfully learn together to understand climate change through a multitude\n                     of perspectives.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735432\">On the Habitabilities of Bacterial Cellulose for Living Artefacts<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Eduard Georges Groutars<\/li>\n               <li class=\"nameList\">Joana Martins<\/li>\n               <li class=\"nameList Last\">Elvin Karana<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Bacterial cellulose (BC), also known as a Kombucha mat or SCOBY, is a grown material\n                     widely adopted in design and HCI communities due to its biodegradability, accessibility\n                     and mechanical versatility. 
Alongside these aspects, BC&#8217;s qualities to become a habitat\n                     for other living organisms, i.e., its habitabilities, have been researched in biotechnological\n                     sciences but not fully explored in design. In response to the call for biobased material\n                     alternatives and the expanding design space for multispecies interactions in HCI,\n                     in this paper, we unpack this habitability potential of BC in the design of living\n                     artefacts. Through visual storytelling we unveil our hands-on biolab journey with\n                     <em>Komagataeibacter<\/em>, the bacteria that produce BC, and show how fungi, microalgae and cyanobacteria can\n                     inhabit this material. We outline diverse options for tuning the habitabilities of\n                     BC to incite HCI designers in the creation of living artefacts that are fully grown\n                     and compatible with regenerative ecologies.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id=\"VR_and_AR\">SESSION: VR and AR<\/a><\/h2>\n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735793\">D360: a Tool for Supporting Rapid, Iterative, and Collaborative Analysis of 360\u00b0 Video<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Wo Meijer<\/li>\n               <li class=\"nameList\">Tilman Dingler<\/li>\n               <li class=\"nameList Last\">Gerd Kortuem<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div 
style=\"display:inline\">\n                  <p>Designers can immerse themselves into the world of users by using 360\u00b0 video leading\n                     to richer insights and better solutions. However, 360\u00b0 video is challenging to share\n                     and incompatible with existing tools, preventing designers from effectively integrating\n                     it into their iterative and collaborative workflows. To address these challenges,\n                     we developed D360, a tool that enables designers to view, annotate, and collaboratively\n                     analyze 360\u00b0 video. D360 features a web-based 360\u00b0 video viewing and annotation tool,\n                     a database, and Miro integration to analyze 360\u00b0 video using a familiar collaborative\n                     process. We evaluated D360 using walk-throughs with six professional designers that\n                     verified its utility and identified improvements to creating and presenting annotations.\n                     By providing both design directions for future 360\u00b0 video tools for designers and\n                     our open source tool, we enable practitioners and researchers to leverage the rich\n                     interaction and visual context of 360\u00b0 video for more impactful insights.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id =\"Community_and_Social_Design\">SESSION: Community and Social Design<\/a><\/h2>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" 
href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735695\">HCI for Climate Resilience: Developing an Individual and Community Focused Framework\n                  through a Grounded Theory Approach<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Linda Hirsch<\/li>\n               <li class=\"nameList\">Daeun Hwang<\/li>\n               <li class=\"nameList\">Mj Johns<\/li>\n               <li class=\"nameList Last\">Katherine Isbister<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Natural hazards, such as floods and wildfires, increasingly impact our lives severely,\n                     requiring everyone living in risk areas to become climate resilient. Human-Computer\n                     Interaction (HCI) research has explored climate change and sustainable user behavior,\n                     but has yet to understand its role in developing and maintaining climate resilience.\n                     We approach the gap with a Grounded Theory approach, conducting 16 semi-structured\n                     expert interviews to understand individuals\u2019 and communities\u2019 ongoing challenges and\n                     needs and the role of technology in developing climate resilience. Results show that\n                     technology is deeply entangled in the process and supports and hinders factors for\n                     communicating, engaging, and empowering communities and individuals. Our work contributes\n                     to defining and structuring HCI\u2019s role in individuals\u2019 and communities\u2019 climate resilience\n                     with the framework <em>HCI for Climate Resilience of Individuals and Communities<\/em>. 
Additionally, we highlight open research and interdisciplinary collaboration opportunities\n                     to approach, maintain, and increase climate resilience from the bottom up.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735698\">Designing for Discourse: Social Media, Socio-Technical Rhetorical Strategies, and\n                  Affirmative Action Discussions<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Cassidy Pyle<\/li>\n               <li class=\"nameList\">Nicole B. Ellison<\/li>\n               <li class=\"nameList Last\">Nazanin Andalibi<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Social media platforms enable diverse users to engage in everyday political talk with\n                     (un)known audiences. Platform features and affordances may shape political discussions\n                     and how audiences make sense of them, potentially shifting political attitudes. Using\n                     affirmative action (AA) \u2013 a controversial, identity-centric higher education policy\n                     \u2013 as a context for analysis, we investigate social media features\u2019 and affordances\u2019\n                     role in AA discussions. 
Our qualitative content analysis of over 38,000 social media\n                     posts and comments across Reddit, Twitter\/X, and TikTok demonstrates how features\n                     (e.g., Green Screen) and affordances (e.g., visibility) shape the presentation of\n                     external evidence and cues on social media that help users determine information veracity.\n                     We introduce <em>socio-technical rhetorical strategies<\/em> to describe rhetorical devices enabled by platform features and affordances and consider\n                     how these strategies are used to express and refute racism online. Finally, we suggest\n                     ways that social media designers may leverage visibility, navigability, and association\n                     affordances to enhance users\u2019 ability to make sense of and safely experience AA discussions.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            <h2><a id = \"Customization_and_Personalization\">SESSION: Customization and Personalization<\/a><\/h2>\n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735693\">SiCo: An Interactive Size-Controllable Virtual Try-On Approach for Informed Decision-Making<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Sherry X. 
Chen<\/li>\n               <li class=\"nameList\">Alex Christopher Lim<\/li>\n               <li class=\"nameList\">Yimeng Liu<\/li>\n               <li class=\"nameList\">Pradeep Sen<\/li>\n               <li class=\"nameList Last\">Misha Sra<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Virtual try-on (VTO) applications aim to replicate the in-store shopping experience\n                     and enhance online shopping by enabling users to interact with garments. However,\n                     many existing tools adopt a one-size-fits-all approach when visualizing clothing items.\n                     This approach limits user interaction with garments, particularly regarding size and\n                     fit adjustments, and fails to provide direct insights for size recommendations. As\n                     a result, these limitations contribute to high return rates in online shopping. To\n                     address this, we introduce SiCo, a new online VTO system that allows users to upload\n                     images of themselves and interact with garments by visualizing how different sizes\n                     would fit their bodies. Our user study demonstrates that our approach significantly\n                     improves users\u2019 ability to assess how outfits will appear on their bodies and increases\n                     their confidence in selecting clothing sizes that align with their preferences. 
Based\n                     on our evaluation, we believe that SiCo has the potential to reduce return rates and\n                     transform the online clothing shopping experience.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735771\">To Each Their Own: Exploring Highly Personalised Audiovisual Media Accessibility Interventions\n                  with People with Aphasia<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Alexandre Nevsky<\/li>\n               <li class=\"nameList\">Filip Bircanin<\/li>\n               <li class=\"nameList\">Elena Simperl<\/li>\n               <li class=\"nameList\">Madeline N Cruice<\/li>\n               <li class=\"nameList Last\">Timothy Neate<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Digital audiovisual media (e.g., TV, streamed video) is an essential aspect of our\n                     modern lives, yet it lacks accessibility \u2013 people living with disabilities can experience\n                     significant barriers. While accessibility interventions can improve the access to\n                     audiovisual media, people living with complex communication needs have been under-represented\n                     in research and are potentially left behind. Future visions of accessible digital\n                     audiovisual media posit highly personalised content that meets complex accessibility\n                     needs. 
We explore the impact of such a future by conducting bespoke co-design sessions\n                     with people with aphasia \u2013 a language impairment common post-stroke \u2013 creating four\n                     highly personal accessibility interventions that leverage audiovisual media personalisation.\n                     We then trialled these prototypes with 11 users with aphasia; examining the effects\n                     on shared social experiences, creative intent, interaction complexity, and feasibility\n                     for content producers. We conclude by critically reflecting on future implementations,\n                     raising open questions and suggesting future research directions.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735795\">Customizable AI for Depression Care: Improving the User Experience of Large Language\n                  Model-Driven Chatbots<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Yi Li<\/li>\n               <li class=\"nameList\">Xuanxuan Ding<\/li>\n               <li class=\"nameList\">Yifan Chen<\/li>\n               <li class=\"nameList\">Yeye Li<\/li>\n               <li class=\"nameList Last\">Nan Ma<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Large Language Models (LLMs) demonstrate significant potential in the field of mental\n                     health. However, existing chatbots often lack personalized designs, which may limit\n                     their ability to fully address the complex needs of users with depression. 
This study\n                     builds upon the previously developed CloudEcho system, a mental health management\n                     application that integrates emotion monitoring and psychological support functions,\n                     to explore the impact of role customization features on user trust and customization\n                     experience. Using a mixed-methods approach, this study compares the differences between\n                     the system with role customization features and its original version. Quantitative\n                     results indicate that role customization can enhance user trust, showcasing high usability\n                     and satisfaction. Qualitative interviews further reveal the strengths and limitations\n                     of this feature and suggest directions for optimization. Together, these findings\n                     highlight the potential value of chatbot role customization in mental health support\n                     and offer theoretical and practical guidance for future LLM-driven personalized design\n                     and optimization in mental health contexts.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id =\"Generative_AI_as_Design_Material\">SESSION: Generative AI as Design Material<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735414\">Towards Holistic Prompt Craft<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Joseph Lindley<\/li>\n               <li class=\"nameList Last\">Roger Whitham<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>We 
present an account of an ongoing practice-based Design Research programme that\n                     explores real-time AI image generation. Based on three installations, we reflect on\n                     the design of <em>PromptJ<\/em>, a user interface built around the concept of a prompt \u2018mixer\u2019. We present a series\n                     of strong concepts based on the design and deployment of PromptJ. Later, we cohere\n                     and abstract our strong concepts into the notion of <em>Holistic Prompt Craft<\/em>, which describes the importance of considering all relevant parameters concurrently.\n                     Finally, we present <em>PromptTank<\/em>, a prototype design which exemplifies these principles. Our contributions are articulated\n                     as strong concepts or intermediate knowledge, intended to be used generatively by\n                     informing and inspiring practitioners and researchers working in this space.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735437\">Hidden Layer Interaction: A Technique to Explore the Material of Generative AI<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Imke Grabe<\/li>\n               <li class=\"nameList Last\">Tom Jenkins<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>This pictorial describes the process of developing an interaction technique for directly\n                     engaging with the hidden layers of a generative AI model for image synthesis. 
First,\n                     we give some background to generative AI in HCI, arguing that current interaction\n                     techniques prevent us from directly interacting with the material of AI, foreclosing\n                     its use in design. Drawing on inspiration from the Computer Science field of feature\n                     visualization, we investigate the materiality of our prototype, a GAN model trained\n                     to generate fashion imagery, and show how Hidden Layer Interaction offers an alternative\n                     to standard prompting. In doing so, we illustrate how this change in approach leads\n                     to new forms of interaction with the internal semantics of generative AI, and demonstrate\n                     how one might use Hidden Layer Interaction to engage with AI as a material in design.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id =\"Visualization_and_Physicalization\">SESSION: Visualization and Physicalization<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735425\">Flipping Perspectives: Visualising Digital Smell Training<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Ceylan Be\u015fevli<\/li>\n               <li class=\"nameList\">Ana Marques<\/li>\n               <li class=\"nameList\">Giada Brianza<\/li>\n               <li class=\"nameList\">Christopher Dawes<\/li>\n               <li class=\"nameList Last\">Marianna Obrist<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n        
          <p>Recovering a lost ability is rarely easy, but unlike strengthening a muscle, progress\n                     in <em>smell training<\/em>, repeated exposure to specific scents to support recovery or maintain function, is\n                     often invisible. For those undergoing Digital Smell Training (DST), data visualisations\n                     may be the only markers of change. <em>But how can graphs and numbers sustain motivation over months of slow, unpredictable\n                        recovery?<\/em> This pictorial adopts a Research through Design approach to explore how data visualisations\n                     might better support motivation, adherence, and long-term engagement in DST. We draw\n                     on a six-month in-home study with 18 participants with varying olfactory abilities\n                     using a technology probe. Following initial feedback, we ran a co-design workshop\n                     to understand participants\u2019 visualisation needs. These insights informed eight design\n                     directions and three visualisation concepts, later evaluated in a focus group. 
We\n                     explore how visualisations help <em>\u2018flip perspectives\u2019<\/em>, from tracking outcomes to nurturing perseverance across the uncertain journey of\n                     smell rehabilitation.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735433\">From Euclidean to Topological: Visual Exploration of Transformations Types in Shape-Changing\n                  Interfaces<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList Last\">Majken Kirkegaard Rasmussen<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>This pictorial explores the challenge of distinguishing between shape-changing and\n                     actuated interfaces by applying a foundational geometry framework from mathematics\n                     to categorise different types of transformations. The framework introduces six geometric\n                     transformation types: Euclidean transformations, similarity transformations, affine\n                     transformations, projective transformations, topological transformations, and non-topological\n                     transformations. Through visual analysis, the pictorial contributes a new vocabulary\n                     for describing the transformations of shape-changing interfaces. 
It offers reflections\n                     on which transformations can be considered shape changing, as well as how features\n                     of the physical design might impact the perceived shape change.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735774\">From Diagrams to Experience: Data Visceralisation of Ecosystem State-and-Transition\n                  Models in Virtual Reality<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Ad\u00e9la\u00efde Genay<\/li>\n               <li class=\"nameList\">Michael James Neylan<\/li>\n               <li class=\"nameList\">Warwick Laird<\/li>\n               <li class=\"nameList\">Thomas Romanis<\/li>\n               <li class=\"nameList\">Katrina Szetey<\/li>\n               <li class=\"nameList\">Anna E Richards<\/li>\n               <li class=\"nameList\">Bernhard Jenny<\/li>\n               <li class=\"nameList Last\">Tom Chandler<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Communicating complex scientific concepts to non-experts is a persistent challenge.\n                     The communication of ecological state-and-transition models (STMs) through box-and-arrow\n                     diagrams is one example. This paper explores how virtual reality (VR) can make STMs\n                     more accessible. Using ecosystem STMs as a case study, we present a proof-of-concept\n                     system enabling users to viscerally experience the content of the model. We followed\n                     a three-phased participatory design process: first, 2 ecology experts guided the development\n                     of a VR prototype. 
Next, 17 government environmental management professionals evaluated\n                     its utility and features. Finally, after refining the system, 12 VR researchers informed\n                     design considerations and improvements. Our findings provide practical insights for\n                     visualising STMs in VR, and also contribute to the emerging field of \u201cdata visceralisation\u201d.\n                     We found this approach engages users and supports understanding of qualitative aspects\n                     of real-world phenomena. However, complex models like ecosystem STMs require the creation\n                     of accurate and extensive simulations. We conclude with a discussion for future directions.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735760\">Making Local Data Memoirs: Changing Orientations in Relation to Environmental Concerns<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Sylvia Janicki<\/li>\n               <li class=\"nameList Last\">Yanni Alexander Loukissas<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>If we want to understand the affective power of data in shaping our experiences of\n                     place, we need new forms of inquiry that reveal what data can do: not just for us,\n                     but to us. In this paper, we explore how engaging with data changes the way we feel\n                     about environmental concerns that we live with through an approach we call local data\n                     memoir. 
This approach brings together autobiographical making and writing with theorization\n                     through a phenomenological lens, specifically using the concept of orientation. We\n                     find that attending to our own orientation can be a useful means of tracking how our\n                     lived experiences are shaped by practices with data. Our contributions are two-fold:\n                     first, we demonstrate how the phenomenological concept of orientation can be used\n                     to interpret encounters with data; second, we introduce local data memoirs as a form\n                     of affective inquiry with data.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735784\">ChartChecker: A User-Centred Approach to Support the Understanding of Misleading Charts<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Tom Biselli<\/li>\n               <li class=\"nameList\">Katrin Hartwig<\/li>\n               <li class=\"nameList\">Niklas Kneissl<\/li>\n               <li class=\"nameList\">Louis Pouliot<\/li>\n               <li class=\"nameList Last\">Christian Reuter<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Misinformation through data visualisation is particularly dangerous because charts\n                     are often perceived as objective data representations. 
While past efforts to counter\n                     misinformation have focused on text and, to some extent, images and video, developing\n                     user-centred strategies to combat misleading charts remains an unresolved challenge.\n                     This study presents a conceptual approach through ChartChecker, a browser-plugin that\n                     aims to automatically extract line and bar chart data and detect potentially misleading\n                     features such as non-linear axis scales. A participatory design approach was used\n                     to develop a user-centred interface to provide transparent, comprehensible information\n                     about potentially misleading features in charts. Finally, a think-aloud study (N =\n                     15) with ChartChecker revealed overall satisfaction with the tool\u2019s user interface,\n                     comprehensibility, functionality, and usefulness. The results are discussed in terms\n                     of improving user engagement, increasing transparency and optimising tools designed\n                     to counter misleading information in charts, leading to overarching design implications\n                     for user-centred strategies for the visual domain.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id =\"Design_for_Specific_Contexts\">SESSION: Design for Specific Contexts<\/a><\/h2>\n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735697\">Driving with Algorithms Beyond Gig Work: Investigating How Algorithmic Management\n                  Affects Workers\u2019 Practices in an On-Demand Ride-Pooling Service<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Yongjae 
Sohn<\/li>\n               <li class=\"nameList\">Daehyun Kwak<\/li>\n               <li class=\"nameList\">Sehee Son<\/li>\n               <li class=\"nameList\">Chowon Kang<\/li>\n               <li class=\"nameList Last\">Youn-kyung Lim<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>On-demand ride-pooling (ODRP) services are a new alternative mode of transportation\n                     that has recently emerged due to technological advancements. While algorithmic management\n                     plays a crucial role in ODRP services and can create complex workplace dynamics, the\n                     experiences of ODRP workers remain underexplored in the HCI field. To address this\n                     gap, we interviewed 16 drivers of Shucle, an ODRP service in South Korea. We examined\n                     the drivers\u2019 detailed work practices, focusing on the perceived challenges of working\n                     under algorithmic management and the perceived benefits and necessity of algorithmic\n                     management. This paper provides empirical evidence of the impact of algorithmic management\n                     on ODRP drivers\u2019 work environments and discusses the implications of our findings\n                     for supporting algorithmic workplaces in ODRP services. 
By positioning ODRP drivers\n                     as company employees embedded within a vast, dynamic traffic environment, our study\n                     extends algorithmic management scholarship beyond gig work and other algorithmic work\n                     contexts, offering fresh insights into how autonomy and accountability are configured\n                     across algorithmic workplaces.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735798\">Chat with Standards: An Assistant for the Provision of Normative Knowledge for Practical\n                  Use in Welding<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Nils Wakan Boecking<\/li>\n               <li class=\"nameList\">Parastou Azari Gargari<\/li>\n               <li class=\"nameList\">Sarah Reichel<\/li>\n               <li class=\"nameList\">Sven Hoffmann<\/li>\n               <li class=\"nameList\">Tobias Richter<\/li>\n               <li class=\"nameList Last\">Volker Wulf<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Standards form an important basis for the manufacture of high-quality and safe products.\n                     As the standards landscape becomes more complex over time, the mandatory interpretation\n                     is becoming increasingly challenging and knowledge gained from experience in dealing\n                     with the standards plays a key role. 
This paper uses welding\u2014as a manufacturing process\n                     that is highly regulated by standards\u2014to demonstrate the possibilities that large\n                     language models (LLMs) offer to assist people and shows how knowledge management can\n                     be supported in applying these standards. To this end, a chatbot prototype specialising\n                     in the specific requirements of welding standards was developed and evaluated within the\n                     methodological framework of a design case study. The results show that LLMs have the\n                     potential to improve access to complex standards beyond simple databases and document\n                     searches and facilitate compliance with these requirements. However, there are certain\n                     limitations regarding normative language and the need for referencing.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id =\"Reflection_and_Self-Awareness\">SESSION: Reflection and Self-Awareness<\/a><\/h2>\n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735790\">LuciEntry: Towards Understanding the Design of Lucid Dream Induction<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Po-Yao (Cosmos) Wang<\/li>\n               <li class=\"nameList\">Xiao Zoe Fang<\/li>\n               <li class=\"nameList\">Gabriel Ducos<\/li>\n               <li class=\"nameList\">Nathaniel Yung Xiang Lee<\/li>\n               <li class=\"nameList\">Antony Smith Loose<\/li>\n               <li class=\"nameList\">Rohit Rajesh<\/li>\n               <li class=\"nameList\">Nethmini Botheju<\/li>\n               <li class=\"nameList\">Eric Chen<\/li>\n               <li 
class=\"nameList\">Maria F. Montoya<\/li>\n               <li class=\"nameList\">Alexandra Kitson<\/li>\n               <li class=\"nameList\">Karen Konkoly<\/li>\n               <li class=\"nameList\">Rohan Sagi<\/li>\n               <li class=\"nameList\">Rakesh Patibanda<\/li>\n               <li class=\"nameList\">Nathan W Whitmore<\/li>\n               <li class=\"nameList\">Mahdad Jafarzadeh Esfahani<\/li>\n               <li class=\"nameList\">Jialin Deng<\/li>\n               <li class=\"nameList\">Jiajun Bu<\/li>\n               <li class=\"nameList\">Martin Dresler<\/li>\n               <li class=\"nameList\">Don Samitha Elvitigala<\/li>\n               <li class=\"nameList\">Nathan Arthur Semertzidis<\/li>\n               <li class=\"nameList Last\">Florian \u2018Floyd\u2019 Mueller<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Lucid dreaming, a state in which people become aware that they are dreaming, is known\n                     for its many mental and physical health benefits. However, most lucid dream induction\n                     techniques, such as reality testing, require significant time and effort to master,\n                     creating a barrier for people seeking these experiences. We designed LuciEntry, a\n                     portable interactive prototype aimed at helping people induce lucid dreaming through\n                     well-timed visual and auditory cues. We conducted a lab and a field study to understand\n                     LuciEntry\u2019s user experience. The interview data allowed us to identify three themes.\n                     Building on these findings and our design practice, we derived seven considerations\n                     to guide the design of future lucid dream systems. 
Ultimately, this work aims to inspire\n                     further research into interactive technologies for altered states of consciousness.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735675\">Lino: An Interactive System for Daily Mood Recordings Supporting Meaning-Making through\n                  Single Stroke Drawing Approach<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Nanum Kim<\/li>\n               <li class=\"nameList\">Sangsu Jang<\/li>\n               <li class=\"nameList\">Hansol Kim<\/li>\n               <li class=\"nameList\">Dayoung Shin<\/li>\n               <li class=\"nameList Last\">Young-Woo Park<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Mood is influenced by complex factors and involves subjective interpretation, leading\n                     to diverse methods of recording it. While existing tools provide customizable features,\n                     they often fall short in promoting deep reflection and meaningful engagement. We developed\n                     Lino, an interactive system that includes single stroke drawing records created in\n                     a mobile app and a desktop frame designed for archiving these drawings and supporting\n                     the attachment of optional voice recordings. 
Through a three-week field study with\n                     six participants, we found that participants made meaning in the process of reframing\n                     their daily moods into single stroke drawings and continuously refined these recordings\n                     through interactions in their everyday spaces. Our findings suggest considerations for\n                     empowering users through personal interpretation in the meaning-making process of data\n                     collection and visualization for effective personal informatics systems, and for supporting\n                     evolving personal reflective practices.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            <h2><a id=\"Design_Methods_and_Processes\">SESSION: Design Methods and Processes<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735423\">A language of one&#8217;s own: annotations and layering as design material<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Elvia Vasconcelos<\/li>\n               <li class=\"nameList\">Kristina Andersen<\/li>\n               <li class=\"nameList\">Bruna Goveia da Rocha<\/li>\n               <li class=\"nameList Last\">Troy Nachtigall<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>As designers, we often use annotations to add information and reflection to our work.\n                     We would like to suggest that these annotations let personal design languages emerge.\n                     We experiment with the materiality of the pictorial format itself to show how such\n                     languages emerge from a particular body of work, as it travels 
from sketches, text,\n                     and various material artefacts. This emergent language uses annotations and layers\n                     as design materials, providing access to the nuances in the thinking behind our research\n                     and making processes. Through this language, the voice of the maker emerges revealing\n                     subjective and situated knowledges that would not be available otherwise. We reflect\n                     on these insights and share a set of strategies using annotations and layerings for\n                     others to use. As a result, we contribute an approach of annotations and layering\n                     to engage complexity when making, reflecting, and disseminating design research.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735407\">Filling the Hive: A Reflective Toolkit for Community-led Rural Development<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Chiara Leonardi<\/li>\n               <li class=\"nameList\">Eleonora Mencarini<\/li>\n               <li class=\"nameList Last\">Elena Not<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Designing socio-technical systems for rural areas requires empowering local actors\n                     to actively engage in a creative process that thoroughly considers the unique characteristics\n                     of these territories. Methods intended for urban contexts often fail to account for\n                     rural communities\u2019 specific values, cultures, and needs. 
Furthermore, the focus is\n                     usually on the design of the digital part of the innovation, its requirements, and\n                     technological constraints, leaving out other essential enablers. To address this,\n                     we present a toolkit designed to boost reflection on the key \u2018ingredients\u2019 \u2014 social,\n                     economic, technological, political, and infrastructural \u2014 essential for addressing\n                     rural challenges through socio-technical interventions. The toolkit offers tangible,\n                     user-friendly resources that encourage dialogue, self-reflection, and the creative\n                     envisioning of rural transformations. It includes inspirational cards as well as reflection questions to\n                     support communities in evaluating their needs, resources, and aspirations, fostering self-awareness,\n                     learning, and action.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735418\">LINKING THEORY AND PRACTICE: Developing an Image-Schema-based Design Tool for Closeness\n                  Technologies<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Cordula Baur<\/li>\n               <li class=\"nameList\">Tamara Friedenberger<\/li>\n               <li class=\"nameList\">Franzisca Maas<\/li>\n               <li class=\"nameList\">Louisa Maurer<\/li>\n               <li class=\"nameList Last\">J\u00f6rn Hurtienne<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>People use technology to stay in touch with their family and friends. 
To design novel\n                     technologies that bring us closer, the theoretical concept of image schemas is a perfect\n                     fit. Image schemas are abstract representations of embodied experiences which can\n                     be used to design intuitive, inclusive and innovative technologies. However, their\n                     application in design processes requires additional effort and time, while existing\n                     design tools often lack a theoretical foundation for social closeness. To address\n                     this gap, we sourced domain-specific image schemas and conducted iterative user-centred\n                     research through design to create an easy-to-use image schema design tool which facilitates\n                     the creation of closeness technologies. In this pictorial, we document our process\n                     and provide a design tool that connects theory with actionable design strategies,\n                     providing designers with clear guidance and a practical tool for metaphorical integration.\n                     The tool can be found at https:\/\/osf.io\/twndg\/.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735796\">Explainable AI for Daily Scenarios from End-Users\u2019 Perspective: Non-Use, Concerns,\n                  and Ideal Design<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Lingqing Wang<\/li>\n               <li class=\"nameList\">Chidimma Lois Anyi<\/li>\n               <li class=\"nameList\">Kefan Xu<\/li>\n               <li class=\"nameList\">Yifan Liu<\/li>\n               <li class=\"nameList\">Rosa I. Arriaga<\/li>\n               <li class=\"nameList Last\">Ashok K. 
Goel<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Centering humans in explainable artificial intelligence (XAI) research has primarily\n                     focused on AI model development and high-stakes scenarios. However, as AI becomes increasingly\n                     integrated into everyday applications in often opaque ways, the need for explainability\n                     tailored to end-users has grown more urgent. To address this gap, we explore end-users\u2019\n                     perspectives on embedding XAI into daily AI application scenarios. Our findings reveal\n                     that XAI is not naturally accepted by end-users in their daily lives. When users seek\n                     explanations, they envision XAI design that promotes contextualized understanding,\n                     empowers adoption of and adaptation to AI systems, and considers multiple stakeholders\u2019 values.\n                     We further discuss supporting users\u2019 agency in XAI non-use and alternatives to XAI\n                     for managing ambiguity in AI interactions. Additionally, we provide design implications\n                     for XAI at personal and societal levels. 
These include understanding users\n                     through a computational rationality lens, adaptive design that coevolves with users,\n                     and advancing the &#8220;society-in-the-loop&#8221; vision with everyday XAI.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735676\">When Personas Talk to You: Evaluating the Evolution of User Personas from Static Profiles\n                  to Conversational User Interfaces<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Ilkka Kaate<\/li>\n               <li class=\"nameList\">Joni Salminen<\/li>\n               <li class=\"nameList\">Soon-Gyo Jung<\/li>\n               <li class=\"nameList\">Trang Thi Thu Xuan<\/li>\n               <li class=\"nameList\">Jinan Y. Azem<\/li>\n               <li class=\"nameList\">Jo\u00e3o M. Santos<\/li>\n               <li class=\"nameList Last\">Bernard J Jansen<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>The development of persona systems provides a possibility for end users to interact\n                     with different persona modalities. In a 54-participant randomized controlled experiment,\n                     we compare two persona interaction modalities, document and dialogue personas, both\n                     generated using AI approaches from survey data. Overall, dialogue personas appear\n                     to be perceived more favorably than document personas. However, document personas\n                     exhibit a wider range of perceptions, suggesting that experiences with document personas\n                     are more polarizing among users. 
The document personas had higher transparency and\n                     were perceived as more complete, but the task completion was perceived as more difficult,\n                     although the task success rate was higher. The dialogue personas were perceived as\n                     more usable, with a higher System Usability Scale score, and more enjoyable. Our findings\n                     provide critical insights into the increasingly important area of persona interaction\n                     modalities and the broad paradigm of human-persona interaction.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735794\">The Quality of Speculation \u2013 A Scoping Review<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Ronda Ringfort-Felner<\/li>\n               <li class=\"nameList\">Judith D\u00f6rrenb\u00e4cher<\/li>\n               <li class=\"nameList Last\">Marc Hassenzahl<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>In Human-Computer Interaction, speculative design is widely used to explore the opportunities\n                     and challenges of future technologies. However, the criteria that define a \u201cgood\u201d\n                     speculation are scattered throughout the literature. This challenges the application\n                     and evaluation of speculative design especially in an academic context. 
Through a\n                     review of 63 publications on speculative design, design fiction, and critical design,\n                     we identified nine key qualities of speculations that can be grouped in three categories:\n                     speculative, discursive, and process. Speculative qualities (i.e., fictional, critical,\n                     socio-political) reflect the fundamental characteristics of speculative design. Discursive\n                     qualities (i.e., experienceable, thought-provoking) facilitate envisioning and debate.\n                     Process qualities (i.e., grounded, participative, reflected, playful) encourage an\n                     inclusive, responsible, scientifically based and creative approach to speculative\n                     design. We propose this as a descriptive taxonomy of qualities, which can serve as\n                     a starting point for the creation and evaluation of high-quality speculative designs\n                     in diverse contexts, including academic peer review.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            <h2><a id = \"Tangible_and_Material_Interfaces\">SESSION: Tangible and Material Interfaces<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735409\">Empowering Sustainable E-Textiles: DIY Biofiber Wet Spinning for Community Material\n                  Exploration<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Jingwen Zhu<\/li>\n               <li class=\"nameList\">Megan Wu<\/li>\n               <li class=\"nameList\">Ruth Zhao<\/li>\n               <li class=\"nameList\">Samantha Chang<\/li>\n               <li class=\"nameList Last\">Cindy Hsin-Liu Kao<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n            
   <div style=\"display:inline\">\n                  \n                  <p>Recent research in e-textiles within the HCI community has shown a growing interest\n                     in sustainable prototyping to reduce the environmental impact of waste generated during\n                     e-textile fabrication. Meanwhile, the textile crafts community is exploring alternative\n                     sustainable materials. Despite shared goals, communication, knowledge exchange, and\n                     collaboration between these two disciplines remain limited. This work leverages HCI\n                     knowledge in open-source wet spinning and biofiber recipes to empower individuals\n                     in the textile crafts community to create functional biodegradable yarns for e-textile\n                     prototyping at home or in individual textile studios. To better understand their material\n                     exploration needs, we hosted a community-engaged workshop. Our findings emphasized\n                     the need for user-friendly machine designs, the value of hands-on learning, and the\n                     benefits of iterative exploration for examining the design affordances of material\n                     temporality. 
Through these efforts, we aim to promote sustainable making via community\n                     engagement and provide more widely available technical tools and curriculum resources\n                     for material-driven craft explorations.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735416\">E-Sewing: Exploring the Design Space of Machine-Sewing E-Textile Circuits<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Salma Ibrahim<\/li>\n               <li class=\"nameList Last\">Sara Nabil<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>This pictorial presents a design research exploration of domestic sewing machines\n                     as hybrid craft tools for creating e-textile circuits. Through iterative making and\n                     experimentation, we examine how conductive materials and electronic components can\n                     be integrated into fabric using sewing machines. We contribute 11 techniques for securely\n                     terminating connections (T1-T4), insulating wires (I1-I4), and design possibilities\n                     for sewing LEDs and electronic components (A1-A3). We also introduce four types of\n                     machine-sewn sensors (S1-S4) for interactivity and present four high-fidelity prototypes\n                     of machine-sewn circuits. To further explore the creative potential of these techniques,\n                     we engaged in a case study with a craft practitioner that uncovers design insights\n                     and limitations. 
Reflecting on these explorations, we highlight the role of sewing\n                     machines in democratizing e-textile design and advancing their use as accessible tools\n                     for hybrid fabrication.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735696\">WovenCircuits: A 3-Step Fabrication Process for Weaving Electric Circuit Layouts in\n                  Everyday Artefacts<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Ahmed Awad<\/li>\n               <li class=\"nameList\">Salma Ibrahim<\/li>\n               <li class=\"nameList Last\">Sara Nabil<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Previous work explored techniques for creating woven e-textiles, emphasizing interactive\n                     input and output elements. However, the integration of electrical connections and\n                     circuitry remains underexplored. Using Research through Design (RtD), we present WovenCircuits,\n                     a design-led inquiry into combining traditional weaving methods with computational\n                     design on digital Jacquard looms to create woven circuit schematics. Through iterative\n                     design experiments, we developed a 3-step process and characterized three fabrication\n                     techniques to: 1) weave insulated electrodes, 2) integrate rigid components into fabric,\n                     and 3) create woven electrical connections. We further examined their electrical behaviour\n                     through key design factors and evaluated the effect of washability on resistance and\n                     dimensions. 
To demonstrate its potential, we designed and built six high-fidelity\n                     research products showcasing diverse applications in interactive everyday objects.\n                     Finally, we reflect on the design opportunities and limitations of WovenCircuits,\n                     contributing to the growing body of knowledge on woven e-textiles.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id =\"Critical_Materials\">SESSION: Critical Materials &amp; Making<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735441\">Towards Yarnier Interactive Textiles: Mapping a Design Journey through Hand Spun Conductive\n                  Yarns<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Etta W Sandry<\/li>\n               <li class=\"nameList\">Lily M Gabriel<\/li>\n               <li class=\"nameList\">Eldy S. Lazaro Vasquez<\/li>\n               <li class=\"nameList Last\">Laura Devendorf<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>The ability to create a wide and varied set of interactive textiles depends on the\n                     materials that one has available. Currently, the range of yarns that can be used to\n                     bring interactivity to textiles is greatly limited, especially considering the diversity\n                     available in non-conductive yarns. 
This pictorial traces a design journey into hand\n                     spinning that seeks to address this limitation and contributes samples of techniques\n                     and materials that could be used to create conductive yarns along with reflection\n                     on design methods that enabled us to explore a wider range of aesthetic expressions.\n                     We advocate for an approach that reconnects with the textiles in e-textiles, embraces\n                     divergence, and prioritizes the material rather than function as the driver of a design\n                     concept. We offer pathways for readers and researchers to continue this exploration\n                     within varied domains and practices.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735413\">Making Seafoam: An Autobiographical Design Journey Engaging Local Ecologies Through\n                  Making<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Fernanda Soares da Costa<\/li>\n               <li class=\"nameList\">Mariana Simoes<\/li>\n               <li class=\"nameList\">Frederico Duarte<\/li>\n               <li class=\"nameList Last\">Valentina Nisi<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Sustainable HCI (SHCI) Researchers are increasingly attuned to environmental issues\n                     in material creation, guided by a posthumanist framework that decentres the human-maker,\n                     accounting for nonhuman agencies. 
Applying \u2018noticing\u2019 as a method, we sourced sea-derived\n                     matter\u2014often dismissed as waste\u2014to make a tangible material we call SeaFoam; to achieve\n                     this, we gathered seaweeds and oyster shells and developed methods and tools in a\n                     kitchen-laboratory makerspace. This pictorial documents a design journey that includes\n                     a Do-it-Yourself (DIY) process of agar extraction (SeaFoam&#8217;s key ingredient) and explorations\n                     with oyster powder to enrich SeaFoam&#8217;s texture. Through the first author&#8217;s autobiographical\n                     Research through Design journalling, we reflect on the evolving relationship between\n                     human-makers and the ecologies of once-living matter and discuss their potential application\n                     in interactive artefacts. This work offers the DIS community an account of first-person\n                     methods combined with Material-Driven Methodologies to enrich the possibilities of\n                     biomaterial creations for interactive applications.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735410\">Yarn as a Means to Give Form to Entanglements of Regulation, Design and Sustainability\n                  Practices<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Anton Poikolainen Ros\u00e9n<\/li>\n               <li class=\"nameList\">Chiara Rossitto<\/li>\n               <li class=\"nameList\">Fatemeh Bakhshoudeh<\/li>\n               <li class=\"nameList\">Rob Comber<\/li>\n               <li class=\"nameList Last\">Stanley J Greenstein<\/li>\n            <\/ul>\n            <div 
class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>When designing with and for complex sustainability processes like waste management,\n                     it is crucial to understand digital technologies as entangled with broader systemic\n                     factors, including physical infrastructures and regulatory instruments. Within the\n                     case of organic household waste management, this pictorial aims at making such relations\n                     visible through design methods. We have used yarn to represent the different threads\n                     of these entanglements and defined specific configurations: tangles, knots, loose\n                     ends, and frayed threads. We discuss how the design practice of giving form to these\n                     entanglements can make complex relations between digital technology, infrastructures,\n                     and regulatory instruments more visible and actionable for HCI, and explore how digital\n                     technologies are \u2013 and can be \u2013 made to work within them.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id=\"Designing_for_Specific_User_Groups\">SESSION: Designing for Specific User Groups<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735430\">Designing Aging Reflection Probes to Elicit Self-Perception of Aging (SPA) Beliefs\n                  of Older Adults in India<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Neeta M Khanuja<\/li>\n               <li class=\"nameList\">Valentina Nisi<\/li>\n               <li 
class=\"nameList Last\">Jodi Forlizzi<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>In this submission, I present my research on the design of technological interventions\n                     to enhance self-perceptions of aging (SPA) and the well-being of older adults in India.\n                     My work focuses on aging experiences and technology opportunities for older adults\n                     in diverse living environments in urban India. These living environments include private\n                     homes (aging in place), assisted living townships (retirement homes), and old age\n                     institutions. Through interviews, the design of an Aging Reflection Probe Kit, and\n                     field engagements, I highlight key aspects of SPA among older adults. These include\n                     social presence, self-efficacy, activities, age-related transitions, life satisfaction,\n                     agency, self-value, age associations, and emotional well-being. I propose a Research\n                     through Design (RtD) approach to explore how technological interventions can operationalize\n                     SPA theories in HCI and contribute to enhancing older adults\u2019 well-being. 
In addition,\n                     I will examine how older adults in diverse living environments adopt and adapt these\n                     technologies and how these interventions shape their aging experiences, environments,\n                     and social networks.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735443\">Serious Games: Charting Refugee Entrepreneurial Journeys Through Novel Analytic Mapping<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Chuike Lee<\/li>\n               <li class=\"nameList\">Awais Hameed Khan<\/li>\n               <li class=\"nameList\">Stephen Viller<\/li>\n               <li class=\"nameList Last\">Dhaval Vyas<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p><em>\u201cMore than 90% of global new settlement needs are unmet\u201d<\/em> according to UNHCR. Among those displaced are refugees and asylum seekers\u2014seeking\n                     a new life and opportunities in Australia. Despite challenges in reassimilating into\n                     a foreign society, some strive to become successful entrepreneurs. We conducted semi-structured\n                     interviews with 15 such refugee and asylum seeker-entrepreneurs in Australia, documenting\n                     and mapping their journeys. This pictorial presents a novel annotation tool to present\n                     (1) the supporting role of organizations, community, and family in facilitating entrepreneurial\n                     success; and (2) the role of digital platforms in self-learning, professional skill\n                     development, and achieving business goals. 
The main contributions of this pictorial\n                     highlight emergent strategies that demonstrate the resilience of refugee and asylum\n                     seeker entrepreneurs overcoming and navigating extraordinary circumstances. We present\n                     a novel analytic mapping tool for researchers and designers to reformat, visualize,\n                     and analyse complex participant journeys.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735778\">From Sociotechnical Gaps to Solutions: Designing AI Tools with Parents to Address\n                  Special Education Advocacy Barriers in IEP Processes<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Ali Zaidi<\/li>\n               <li class=\"nameList Last\">Karrie Karahalios<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Parents of children with disabilities face significant challenges navigating special\n                     education, particularly during Individualized Education Program (IEP) meetings with\n                     school administration, where they must advocate for their child to receive accommodations.\n                     Existing advocacy support methods often exclude families with limited resources. This\n                     work aims to design collaborative systems that mitigate or remove advocacy barriers\n                     and empower parents. 
Building on research demonstrating AI\u2019s potential in advocacy\n                     and special education, we investigate parents\u2019 interactions with schools, advocacy\n                     workflows, perceptions of technology, and visions for AI-based support. Interviews\n                     and design probes with 14 parents reveal systemic barriers, including information\n                     overload, resource constraints, and inequitable power dynamics with schools. Parents\u2019\n                     feedback suggested infrastructures that foster equitable advocacy through simplifying\n                     information, preparing parents via dialogue, and meeting reflection. This study engages\n                     with a traditionally underrepresented population and explores how AI can reshape special\n                     education advocacy, presenting actionable principles for creating systems focused\n                     on parent empowerment.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735770\">Are We Still Under-Serving the Underserved?: An Analysis of 56 Blue-Collar Workers\n                  Using 2 Online Information Services<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Jinan Y. 
Azem<\/li>\n               <li class=\"nameList\">Joni Salminen<\/li>\n               <li class=\"nameList\">Kholoud Khalil Aldous<\/li>\n               <li class=\"nameList\">Fatou Gueye<\/li>\n               <li class=\"nameList Last\">Bernard J Jansen<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>We examined the accessibility of online information services (OISs) for underserved\n                     communities through a user study involving 56 blue collar participants interacting\n                     with a website and an app for five tasks. The blue collar participants were generally\n                     unsuccessful on both platforms, with 12.7% (n=7) unable to successfully complete any\n                     tasks; a hundred percent required at least minor assistance. Participants were also\n                     inefficient, taking 28.62 more steps than optimal (143.1%) on the website and 10.41\n                     more steps (47.3%) on the app. Time inefficiency was also noteworthy, with 535.76\n                     more seconds than optimal (248.0%) on the website and 266.55 more seconds (142.4%)\n                     on the app. Though still poor, the app yielded better outcomes with higher success\n                     rates and usability ratings. Digital proficiency correlated with success on both platforms,\n                     which is good news as this is addressable by OIS providers. Qualitative analysis revealed\n                     that many in this underserved population were unaware that these valuable OISs were\n                     available to them. Findings underscore the need for OIS providers to prioritize targeted\n                     outreach to inform underserved communities that OISs are open and welcoming. 
Designing\n                     OISs with accessibility and simplicity targeted for mobile devices is crucial for\n                     bridging the digital literacy gap and empowering underserved communities to engage\n                     effectively with OISs.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            <h2><a id=\"Multi-Modal\">SESSION: Multi-Modal Interaction Design with Generative AI<\/a><\/h2>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735684\">How Does AI Represent Social Concepts? Examining the Visual Representation of Care\n                  in Text-to-Image Tools<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Zixuan Wang<\/li>\n               <li class=\"nameList\">Nichole Fernandez<\/li>\n               <li class=\"nameList Last\">John Vines<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Text-to-image (T2I) generative AI tools like Midjourney are growing in capability\n                     and popularity, promising a wide range of applications. However, concerns are rising\n                     over the biases in how they represent social concepts like care and the lack of guidance\n                     for designers and users to address these in practice. This paper first presents an\n                     analysis of 140 \u201cphotos of care\u201d generated by Midjourney, and then explores how prompting\n                     might influence the results. 
The findings reveal that AI-generated images reproduce\n                     stereotypical and reductive representations of care by default, neglecting the broad\n                     spectrums of care practices in everyday life. Furthermore, we find that while prompt\n                     engineering might mitigate certain biases, it requires specialised skills, knowledge,\n                     and an ongoing reflexive approach to generate meaningful outputs. We conclude by proposing\n                     a reflexive prompting framework, and discussing the implications for future T2I evaluation\n                     and its responsible use and design.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id=\"Personal_Informatics\">SESSION: Personal Informatics<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735420\">Designing for Secondary Users of Intimate Technologies<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Alejandra G\u00f3mez Ortega<\/li>\n               <li class=\"nameList\">Nadia Campo Woytuk<\/li>\n               <li class=\"nameList\">Joo Young Park<\/li>\n               <li class=\"nameList\">Anupriya Tuli<\/li>\n               <li class=\"nameList\">Deepika Yadav<\/li>\n               <li class=\"nameList\">Marianela Ciolfi Felice<\/li>\n               <li class=\"nameList\">Madeline Balaam<\/li>\n               <li class=\"nameList Last\">Airi Lampinen<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Digital contraceptives are intimate technologies that support their users, and their\n                     partners, in preventing pregnancy. 
These technologies rely on basal body temperature\n                     data to predict ovulation and calculate a fertile window, where there is a risk of\n                     pregnancy if partners have unprotected sex. Although their use is shared and relational,\n                     these technologies are mainly designed for a primary user \u2014 the person who can become\n                     pregnant. We turn our attention to secondary users of digital contraception (i.e.,\n                     sexual partners), specifically, Natural Cycles. We investigate how secondary users\n                     are designed for and how primary users imagine them to be. We contribute empirical\n                     insights on how secondary users are and are not involved in digital contraception\n                     and conclude with three design proposals describing how digital contraception tools\n                     could be designed to involve secondary users. We discuss how designing for secondary\n                     users of intimate technologies requires balancing their potential as co-users and\n                     adversaries.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735422\">MindEat!: Navigating Screen-Centric Dining through Mindful Technology Design<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Rohit Ashok Khot<\/li>\n               <li class=\"nameList Last\">Jung-Ying (Lois) Yi<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>In an era where technology pervades every aspect of our daily lives, including dining,\n                     we grapple with the challenge of 
harmonizing its immersive nature with the ethos of\n                     mindful eating. Despite some strides in designing technologies to support mindful\n                     eating, existing efforts remain fragmented and lack a comprehensive grasp of the intricate\n                     factors essential for cultivating such dining experiences. This pictorial introduces\n                     <em>MindEat!<\/em> an inventive design framework tailored for designers embarking on the development\n                     of technologies that support mindful eating experiences. Similar to the layered composition\n                     of a culinary sandwich, each component of this framework encompasses a distinct aspect\n                     of mindful eating, deserving careful consideration throughout the design process.\n                     By emphasizing metaphorical engagement with mindful eating principles, and practical\n                     application in the design process, this framework aims to contribute to the creation\n                     of enjoyable health-promoting solutions that resonate with the realities of screen-centric\n                     dining cultures.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735412\">Making Intimate Technologies Together<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Nadia Campo Woytuk<\/li>\n               <li class=\"nameList\">Mafalda Gamboa<\/li>\n               <li class=\"nameList\">Alejandra G\u00f3mez Ortega<\/li>\n               <li class=\"nameList\">Joo Young Park<\/li>\n               <li class=\"nameList\">Anupriya Tuli<\/li>\n               <li class=\"nameList\">Deirdre Tobin<\/li>\n               <li class=\"nameList\">Fiona 
Bell<\/li>\n               <li class=\"nameList\">Marianela Ciolfi Felice<\/li>\n               <li class=\"nameList Last\">Madeline Balaam<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Feminist research highlights the urgent need to challenge the oppressive design of\n                     commercial intimate technologies, particularly how the FemTech industry restricts\n                     access to intimate bodily knowledge through paywalls and proprietary systems. Yet,\n                     for decades, women and marginalized communities have turned to Do-It-Yourself (DIY)\n                     or \u2018hacking\u2019 practices to reclaim control over their own gynecology and intimate health,\n                     addressing gaps often ignored by medical research and healthcare. Inspired by visual\n                     themes from these movements, this pictorial critically explores how designers and\n                     HCI researchers might advance DIY approaches to intimate technologies. We exemplify\n                     this with reflections from a series of workshops on handmade intimate sensors, and\n                     draw out the joyful potential of collaborative making\u2014building alliances, destigmatizing\n                     intimate health, and using craft to subvert gender stereotypes. 
We discuss matters\n                     of safety when making together and contribute to ongoing work on building feminist\n                     makerspaces.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id=\"Embodied_Interaction\">SESSION: Embodied Interaction<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735438\">Queer\/Crip Body Mapping: Expressing Dynamic Bodily Experiences with Data<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Alexandra Teixeira Riggs<\/li>\n               <li class=\"nameList\">Sylvia Janicki<\/li>\n               <li class=\"nameList\">Tim Moesgen<\/li>\n               <li class=\"nameList\">Noura Howell<\/li>\n               <li class=\"nameList Last\">Karen Anne Cochrane<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Drawing on queer and disability theories alongside tangible body mapping techniques,\n                     we explore alternative ways of mapping embodied experiences and expressing affective\n                     sensations. Our collaborative autoethnographic approach incorporates sensors to trace\n                     our somatic experiences over time, pairing visualizations of contextual biodata with\n                     personal reflections in written or spoken form. We unpack how these alternative approaches\n                     to body mapping support reflecting on, communicating, and deepening understanding\n                     of embodied experiences by foregrounding temporal and situated aspects. 
We offer expanded\n                     body mapping methods by sharing a plurality of experiences that embrace queer and\n                     crip ways of knowing, foregrounding alternate temporal and spatial representations.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735777\">Temporal Trajectories: Characterizing Somatic Experiences that Unfold Over Time<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Laia Turmo Vidal<\/li>\n               <li class=\"nameList\">Ana Tajadura-Jim\u00e9nez<\/li>\n               <li class=\"nameList Last\">Judith Ley-Flores<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>The body technologies we design profoundly influence our somatic experiences, yet\n                     they are often evaluated through short-term or one-off studies. To design for sustained,\n                     longer-term engagements, we need to understand how somatic experiences evolve when\n                     people repeatedly interact with the same technology over time. With this goal, we\n                     report on two in-the-wild studies of body sonification, one with physically inactive\n                     individuals and another with professional dancers. For one month, participants used\n                     SoniBand, a movement sonification wearable, in their daily lives and shared their\n                     experiences with us through questionnaires and in-depth interviews. 
Drawing from the\n                     concept of trajectories, we identified four temporal patterns that characterized the\n                     participants\u2019 evolving experience with SoniBand: singular, sustained, deepening, and\n                     meandering. We unpack these temporal trajectories and reflect on the characteristics\n                     that may contribute to their emergence. Our findings offer insights for studying and\n                     designing future technologies that embrace the dynamic, evolving nature of people\u2019s\n                     somatic experiences.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735690\">Situated Artifacts Amplify Engagement in Physical Activity<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Jonas Keppel<\/li>\n               <li class=\"nameList\">Marvin Strauss<\/li>\n               <li class=\"nameList\">Luke Haliburton<\/li>\n               <li class=\"nameList\">Henrike Weing\u00e4rtner<\/li>\n               <li class=\"nameList\">Julia Dominiak<\/li>\n               <li class=\"nameList\">Sarah Faltaous<\/li>\n               <li class=\"nameList\">Uwe Gruenefeld<\/li>\n               <li class=\"nameList\">Sven Mayer<\/li>\n               <li class=\"nameList\">Pawe\u0142 W. Wo\u017aniak<\/li>\n               <li class=\"nameList Last\">Stefan Schneegass<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>In the context of rising sedentary lifestyles, this paper investigates the efficacy\n                     of \u201cSituated Artifacts\u201d in promoting physical activity. 
We designed two artifacts\n                     that display users\u2019 physical activity data within their homes \u2013 one physical and one\n                     digital. We conducted a 9-week, counterbalanced, within-subject field study with <em>N<\/em> = 24 participants to assess the impact of these artifacts on physical activity, reflection,\n                     and motivation. We collected quantitative data on physical activity and administered\n                     daily and weekly questionnaires, employing individual Likert items and standardized\n                     instruments, as well as conducted interviews post-prototype usage. Our findings indicate\n                     that while both artifacts act as reminders for physical activity, the physical artifact\n                     was superior in terms of user engagement. The study revealed that this can be attributed\n                     to the higher perceived presence and, thereby, enhanced social interaction, which\n                     acts as a motivational source for activity. 
In this sense, situated artifacts gently\n                     nudge toward sustainable health behavior change.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id=\"Ethics_and_Values\">SESSION: Ethics and Values<\/a><\/h2>\n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735678\">Research as Care: A Reflection on Incorporating the Ethics of Care in Design Research\n                  with People Living with Dementia<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Long-Jing Hsu<\/li>\n               <li class=\"nameList\">Janice Bays<\/li>\n               <li class=\"nameList\">Manasi Swaminathan<\/li>\n               <li class=\"nameList\">Weslie Khoo<\/li>\n               <li class=\"nameList\">Hiroki Sato<\/li>\n               <li class=\"nameList\">Kyrie Jig Amon<\/li>\n               <li class=\"nameList\">Sathvika Dobbala<\/li>\n               <li class=\"nameList\">Min Min Thant<\/li>\n               <li class=\"nameList\">Alex Foster<\/li>\n               <li class=\"nameList\">Kate Tsui<\/li>\n               <li class=\"nameList\">Philip B. 
Stafford<\/li>\n               <li class=\"nameList\">David Crandall<\/li>\n               <li class=\"nameList Last\">Selma Sabanovic<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>When computing researchers design technologies for vulnerable populations and engage\n                     with them over extended periods, researchers may incorporate &#8220;care&#8221;\u2014deliberate actions\n                     to build and maintain relationships with participants\u2014to improve engagement and deepen\n                     their understanding of situated perspectives. However, when researchers choose to\n                     take actions involving care, these efforts are rarely made explicit. Reflecting on\n                     our three-year project of designing and testing a social robot with 31 participants\n                     living with dementia, we realized the benefit of intentional reflection on the ethics\n                     and practice of care during the research process. We offer &#8220;research as care&#8221; guidelines\n                     into computing design research: 1) viewing participants as individuals, 2) being intentional\n                     in the ongoing and dynamic engagement, 3) acknowledging the reciprocity inherent in\n                     care, 4) reporting care practices transparently, 5) tailoring care to the specific\n                     context, and 6) making an informed choice to incorporate care. 
By incorporating research\n                     as care, computing design researchers can provide a more productive experience for\n                     participants and enhance their designs\u2019 overall quality and validity.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735776\">Anti-Heroes: A Role-Based Method to Encourage Ethical Deliberation<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Shruthi Sai Chivukula<\/li>\n               <li class=\"nameList\">Shikha Mehta<\/li>\n               <li class=\"nameList\">Colin M. Gray<\/li>\n               <li class=\"nameList Last\">Ritika Gairola<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>HCI and design researchers have designed, adopted, and customized a range of ethics-focused\n                     methods to inscribe values and support ethical decision-making in a design process.\n                     In this paper, we add to this body of resources, constructing a method that surfaces\n                     the designer\u2019s intentions in an action-focused way, encouraging consideration of both\n                     manipulative and value-centered designer roles. <em>Anti-Hero<\/em> is a card deck that allows a designer to playfully take on pairs of manipulative\n                     (Anti-Heroes) and value-centered (Heroes) roles during design ideation\/conceptualization,\n                     evaluation, and ethical dialogue. 
We illustrate the complexity of our ethics-focused\n                     method creation through a Research through Design (RtD) approach, reflecting on our\n                     iterative design decisions and outcomes from a playtesting evaluation with student\n                     designers. We reflect upon method affordances and performance ambiguities based on\n                     playtesting outcomes, indicating important areas of focus in future ethics-focused\n                     method creation and evaluation.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735783\">Exploring Legal Journeys in Family Justice Systems: Towards Relational Design Approaches\n                  to Advance Access to Justice for Domestic Abuse Survivors<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Clara Crivellaro<\/li>\n               <li class=\"nameList\">Imane El Hakimi<\/li>\n               <li class=\"nameList\">Rima Hussein<\/li>\n               <li class=\"nameList Last\">Rachel Clarke<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Access to justice includes mechanisms enabling people to have their voice heard, exercise\n                     their rights, and hold decision-makers accountable. This paper reports on an exploratory\n                     study aiming to understand Domestic Abuse (DA) survivors\u2019 experiences of legal journeys\n                     through Family Court (FC) and Family Justice Systems (FJS) in England and Wales, and\n                     the potential for digital technologies to support their access to justice. 
We used\n                     qualitative methods including interviews and designed prompts to engage eight DA survivors\n                     and three Family Court professionals. Designed prompts enabled discussions and articulation\n                     of perceptions of socio-technical systems\u2019 potential to support access to justice\n                     in FJS. Our findings describe challenges faced by survivors when accessing FJS, participating\n                     in proceedings, and living with outcomes stemming from Family Courts processes. We\n                     discuss opportunities for digital interventions in these contexts and provide design\n                     orientations for relational approaches to design research seeking to advance access\n                     to justice for DA survivors across legal jurisdictions.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            <h2><a id=\"Health_Wellbeing_1\">SESSION: Health and Wellbeing 1<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735419\">Lull: Designing Crip Pacing Technologies for Rest<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Sarah Homewood<\/li>\n               <li class=\"nameList\">Nantia Koulidou<\/li>\n               <li class=\"nameList\">Claudia A Hinkle<\/li>\n               <li class=\"nameList\">Irene Kaklopoulou<\/li>\n               <li class=\"nameList Last\">Harvey Bewley<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Energy limiting conditions (ELC), such as long COVID and ME\/CFS, require the careful\n                     monitoring and pacing of activity and rest to avoid 
over-exertion. Commercially available\n                     fitness tracking technologies are currently being \u201cmisused\u201d to manage these conditions.\n                     Based on co-design research with people with ELC, we conducted a research-through-design\n                     process to ideate upon what ELC pacing technologies could be. Our ongoing design process\n                     is informed by crip theories that highlight the social and political, rather than\n                     medical, aspects of disability and chronic conditions. In an attempt to offer non-medicalising\n                     pacing technologies, we explored integrating bronze casting as a jewelry making technique\n                     within the prototyping process. We also explore how we can present quantitative pacing\n                     data gathered from wearable sensors through felt vibrations on the body in a way that\n                     can be therapeutic and allow for the user to calibrate the quantitative data with\n                     their own felt sense of fatigue.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735674\">General Practitioners\u2019 Perspectives on a Pre-Consultation Chatbot for Shared Decision-Making<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Mana Samiee<\/li>\n               <li class=\"nameList\">Joel Wester<\/li>\n               <li class=\"nameList\">Rune M\u00f8berg Jacobsen<\/li>\n               <li class=\"nameList\">Michael Skovdal Rathleff<\/li>\n               <li class=\"nameList Last\">Niels van Berkel<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div 
style=\"display:inline\">\n                  <p>General practitioner (GP) consultations are the typical starting point for a patient\u2019s\n                     healthcare journey. Here, GPs aim to support and inform patients to enable a shared\n                     decision-making process. In this work we explore how an interactive chatbot, designed\n                     to prepare patients for their GP consultation, is perceived by GPs to impact patient\n                     consultations, patient-GP interaction, and their work. We conducted an in-depth evaluation\n                     and interview with 15 GPs from 12 different practices. Our findings provide insights\n                     into common challenges in shared decision-making, GP perspectives on the role of chatbots\n                     in preparing patients, and how chatbot technology could impact and transform general\n                     practice. Finally, we reflect on patient and GP agency in shared decision-making and\n                     the impact of technology on this complex relationship.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735779\">The Benefits and Risks of LLMs for Facilitating Medical Decision-Making Among Laypersons<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Charisse Foo<\/li>\n               <li class=\"nameList\">Pin Sym Foong<\/li>\n               <li class=\"nameList\">Camille Nadal<\/li>\n               <li class=\"nameList\">Natasha Ureyang<\/li>\n               <li class=\"nameList\">Thant Naylin<\/li>\n               <li class=\"nameList Last\">Gerald Choon Huat Koh<\/li>\n            <\/ul>\n            
<div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>We explored the potential of Large Language Models (LLMs) to facilitate laypersons\u2019\n                     selection of treatment goals within a complex medical decision-making context. Using\n                     ChatGPT-4o, we developed an LLM-enhanced tool to guide users through goal elicitation,\n                     clarification, and revision. Our findings demonstrate that LLM features can effectively\n                     support these key aspects of decision-making. However, the absence of human interaction,\n                     the lack of patient- and context-specific treatment information, and the risk of information\n                     overload due to unconstrained access to LLM-generated content present significant\n                     risks. To balance the benefits and risks, we propose that LLM-enhanced facilitation\n                     tools for asynchronous, independent use should be clinician-initiated, constrain broad\n                     information search, and focus on creating a safe space for the exploration of laypersons\u2019\n                     preferences and goals regarding the difficult challenges in balancing treatment and\n                     tradeoffs for quality of life.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id=\"User_Experience_in_Specific_Contexts\">SESSION: User Experience in Specific Contexts<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735417\">Facing the Limits: Designing Data Physicalizations to Reduce Water Consumption in\n                  Mountain Huts<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Eleonora Mencarini<\/li>\n               <li 
class=\"nameList\">Paolo Massa<\/li>\n               <li class=\"nameList\">Chiara Leonardi<\/li>\n               <li class=\"nameList Last\">Gaia De Donatis<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Mountain huts are buildings in remote mountain areas that depend on local water sources,\n                     such as snow, rain, and springs, as they are not connected to centralized water systems.\n                     In this pictorial, we report the design process undertaken to explore how data physicalization\n                     can communicate the problem of water scarcity in mountain huts with the ultimate goal\n                     of encouraging visitors to reduce water usage. The process led to two concepts: one\n                     that materializes the impact of each visitor on the water reserve of the hut through\n                     a participatory installation, and the other that invites visitors to explore the concept\n                     of limit, encouraging reflection on what they are willing to renounce and helping\n                     them to make informed choices within tight water constraints. 
With our work, we aim\n                     to contribute to the ongoing efforts in Sustainable HCI to shift the purpose of behavior\n                     change from personal gain to the common good.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h2><a id=\"Cultural_Heritage\">SESSION: Cultural Heritage<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735428\">The Grounded Experience: The Effect of Floor Design Typologies on Human Behavioral\n                  and Cognitive Experience<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Burcu Nimet Dumlu<\/li>\n               <li class=\"nameList\">Takatoshi Yoshida<\/li>\n               <li class=\"nameList\">Tatsuya Saito<\/li>\n               <li class=\"nameList\">Kiyotaka Tani<\/li>\n               <li class=\"nameList\">Akane Yamaguchi<\/li>\n               <li class=\"nameList\">Keita Aono<\/li>\n               <li class=\"nameList Last\">Kouta Minamizawa<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Design reflects the human tendency to adapt to and inhabit surroundings, with architectural\n                     decisions significantly shaping behavioral and cognitive experiences. This pictorial\n                     focuses on the floor as a primary, embodied interface in space. 
To explore its influence,\n                     five floor design typologies (completing, switching, zoning, stimulating, and bending)\n                     were identified through twenty hours of expert discussions involving architects, designers,\n                     an artist, an engineer, and researchers. A collaborative workshop further defined\n                     sub-categories via participatory observations. Fieldwork then informed site selection\n                     for an observational study, which confirmed the behavioral and cognitive impacts of\n                     the identified typologies. Based on these findings, floor codes were developed by\n                     shifting the design focus from visual cues to somatic sensations and applied in design\n                     scenarios. This research contributes to understanding human experience in architectural\n                     environments. It offers insights for virtual architecture, proposing evidence-based\n                     strategies for designing personalized and interactive spaces in virtual and mixed-reality\n                     contexts.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735787\">From Temporal to Spatial: Designing Spatialized Interactions with Segmented-audios\n                  in Immersive Environments for Active Engagement with Performing Arts Intangible Cultural\n                  Heritage<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Yuqi Wang<\/li>\n               <li class=\"nameList\">Sirui Wang<\/li>\n               <li class=\"nameList\">Shiman Zhang<\/li>\n               <li class=\"nameList\">Kexue Fu<\/li>\n               <li class=\"nameList\">Michelle Lui<\/li>\n               
<li class=\"nameList Last\">Ray Lc<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Performance artforms like Peking opera face transmission challenges due to the extensive\n                     passive listening required to understand their nuance. To create engaging forms of\n                     experiencing auditory Intangible Cultural Heritage (ICH), we designed a spatial interaction-based\n                     segmented-audio (SISA) Virtual Reality system that transforms passive ICH experiences\n                     into active ones. We undertook: (1) a co-design workshop with seven stakeholders to\n                     establish design requirements, (2) prototyping with five participants to validate\n                     design elements, and (3) user testing with 16 participants exploring Peking Opera.\n                     We designed transformations of temporal music into spatial interactions by cutting\n                     sounds into short audio segments, applying t-SNE algorithm to cluster audio segments\n                     spatially. Users navigate through these sounds by their similarity in audio property.\n                     Analysis revealed two distinct interaction patterns (Progressive and Adaptive), and\n                     demonstrated SISA\u2019s efficacy in facilitating active auditory ICH engagement. 
Our work\n                     illuminates the design process for enriching traditional performance artform using\n                     spatially-tuned forms of listening.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735772\">To Cuddle, Mingle, Venture, or Guide: How Architectural Affordances Influence the\n                  Experience of Social VR Places<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Jihae Han<\/li>\n               <li class=\"nameList\">Yu Sun<\/li>\n               <li class=\"nameList\">Sophia Ppali<\/li>\n               <li class=\"nameList\">Alexandra Covaci<\/li>\n               <li class=\"nameList Last\">Andrew Vande Moere<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Social virtual reality (VR) encompasses a growing network of three-dimensional virtual\n                     worlds where users interact in a shared, embodied way. While research has focused\n                     on the social interactions between the users themselves, less is known about how the\n                     design of virtual spaces influences these interactions. Our study combines interviews\n                     with 15 social VR users logging over 1,000 hours and a 20-hour spatial protocol of\n                     a purposeful sampling of VR worlds. We analysed how spatial characteristics (including\n                     proportion, sightlines, materiality, atmosphere, and navigation) influence meaningful\n                     user interaction to turn space into place. 
We synthesised four place types for a new\n                     social VR typology: Cuddle worlds that encourage cosy conversations; Mingle worlds\n                     that facilitate new encounters; Venture worlds that promote exploration; and Guided\n                     worlds that elicit a sense of belonging with the online community. By relating architectural\n                     affordances to social patterns, we contribute insights towards the purposeful design\n                     of social VR places.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id=\"Health_and_Wellbeing_2\">SESSION: Health and Wellbeing 2<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735442\">Blowfish Band: A Wearable Inflatable Fidget for Self-Stimulatory (Stim) Behaviors<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList Last\">Elena Sabinson<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>This pictorial presents an autobiographical design study of the <em>Blowfish Band<\/em>, a spiky, inflatable sensory wearable I created and wore as an autistic designer-researcher.\n                     Through photography and reflection, I explore how manual inflation and\n                     tactile interaction supported unmasking, emotional regulation, and reduced harmful\n                     stimming behaviors such as skin-picking. Worn across a range of real-world settings,\n                     the band offered a way to reclaim agency over sensory needs, often suppressed to\n                     conform to social norms. 
By documenting changes to my skin and reflecting on the\n                     ethical tensions of designing for behavior change, I examine how interaction for\n                     self-use can center personal autonomy. This work contributes a first-person perspective\n                     on neurodivergent stim experiences with interactive products and proposes that designing\n                     for one&#8217;s own sensory needs can reveal new possibilities for ethical behavior change\n                     in HCI. The pictorial also offers design considerations for wearability, multi-sensory\n                     interaction, and manual control to support self-directed, sensory-focused engagement.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735768\">pawH: Colorimetric pH-Sensing Toys for Non-Invasive Pet Health Monitoring<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Shuyi Sun<\/li>\n               <li class=\"nameList\">Yuan-Hao Ku<\/li>\n               <li class=\"nameList Last\">Katia Vega<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Colorimetric biosensors open up possibilities for analyte detection in pet saliva,\n                     relevant for various conditions such as bacterial infections and periodontal disease.\n                     This paper introduces pawH, colorimetric biosensors embedded in toy form factors for\n                     non-invasive pet health monitoring by displaying pH through color changes. 
We aim\n                     to provide easily available and non-invasive access to information typically obtained\n                     through veterinary labs. We present the fabrication process of two pet-safe toys:\n                     braided toy and ball. The toys\u2019 color changes are analyzed by a portable spectrometer,\n                     while our app converts the color readouts to estimated pH values. We performed technical\n                     evaluations for color-change response, biosensor concentration, and biosensor reusability.\n                     A user study with 11 pairs of human and canine participants was conducted to evaluate\n                     usability and pet suitability. By using pets\u2019 natural play behaviors, the toys come\n                     into contact with pet saliva, allowing for convenient and non-invasive monitoring\n                     of biochemical health in familiar settings and offering a platform for HCI implementations.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735699\">\u201cMy Happiness Makes You Smile\u201d: Beginning to Understand Telepathic Superpower Design\n                  Via Brain-Muscle Interfaces<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Siyi Liu<\/li>\n               <li class=\"nameList\">Barrett Ens<\/li>\n               <li class=\"nameList\">Nathan Arthur Semertzidis<\/li>\n               <li class=\"nameList\">Gun A. 
Lee<\/li>\n               <li class=\"nameList\">Florian \u2018Floyd\u2019 Mueller<\/li>\n               <li class=\"nameList Last\">Don Samitha Elvitigala<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Designing superpowers in Human-Computer Interaction (HCI), often inspired by science\n                     fiction, has garnered increased attention. However, it is important to ask whether\n                     such superpower designs might have inherent negative side effects, especially considering\n                     that technological advances allow going beyond short demos to integrate these superpowers\n                     into everyday life. To understand the positive and negative side effects of superpower\n                     design, we created \u201cEmoPals\u201d and studied it in everyday life. EmoPals is a novel system\n                     inspired by telepathy, where one user\u2019s emotions are detected through a brain-computer\n                     interface and replicated on the other user\u2019s face through electrical muscle stimulation,\n                     therefore one user\u2019s happiness makes the other smile and vice versa. A 5-day field\n                     study with 12 participants suggests that EmoPals can strengthen emotional connections\n                     and facilitate empathy, however, it also highlights the negative side effects of amplifying\n                     negative emotions and social discomfort. We propose five design recommendations for\n                     designing superpowers that account for negative side effects. 
Ultimately, we aim to\n                     deepen our understanding of superpower design for everyday life.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            <h2><a id=\"Playfulness_and_Engagement\">SESSION: Playfulness and Engagement<\/a><\/h2>\n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735764\">&#8220;Pray For Green, Play For Green&#8221;: Integrating Religion into Climate Change Serious\n                  Games<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Sai Siddartha Maram<\/li>\n               <li class=\"nameList\">Yash Malegaonkar<\/li>\n               <li class=\"nameList\">Niveditha Dudyala<\/li>\n               <li class=\"nameList\">M\u00e1rio Escarce Junior<\/li>\n               <li class=\"nameList Last\">Magy Seif El-Nasr<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Effective climate change games must recognize the unique relations various communities\n                     have with nature. Further, these serious games must factor how climate change diversely\n                     impacts different communities. This paper explores the use of religious narratives\n                     and rituals in serious games to communicate the impact of climate change within faith-based\n                     communities. The study examines whether integrating religion into serious games can\n                     help individuals within these faith-based communities reflect on their connection\n                     to the environment and increase their interest in climate change. 
We present Shloka,\n                     a serious game that incorporates Hindu rituals and narratives, to demonstrate how\n                     integrating religious elements can amplify situational interest, deepen engagement,\n                     and provoke thoughtful reflection on climate change within these communities.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735694\">User Experience, Attitude towards Replay and Play Endings \u2013 a semi-Situated Study\n                  of an Interactive Play Space<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Danica Mast<\/li>\n               <li class=\"nameList\">Joost Broekens<\/li>\n               <li class=\"nameList\">Sanne Irene de Vries<\/li>\n               <li class=\"nameList Last\">Fons J. Verbeek<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>Interactive Play Spaces can support positive behaviour. Play endings, user experience\n                     (UX), and replay intention, can play an important role to achieve this. However, the\n                     relation between these aspects is underexplored. We explore how different types of\n                     endings\u2013<em>open, closed positive (winning), and closed negative (losing)<\/em>\u2013affect user experience and attitude towards replay in a semi-situated study with\n                     93 adults in a science center. 
While assigned ending conditions did not significantly\n                     influence reported experience, many participants, significantly in the open-ended\n                     condition, <em>perceived<\/em> their assigned ending condition differently. Analysis on these <em>self-reported<\/em> endings revealed that players who experienced a closed negative ending reported higher\n                     <em>Stimulation (UEQ).<\/em> Additionally, user experience dimensions (<em>Attractiveness, Dependability, Stimulation<\/em>) and <em>Positive Affect (I-PANAS-SF)<\/em> positively related to <em>attitude towards replay<\/em>. These findings provide insights into the relation between play endings, UX and attitude\n                     towards replay, and highlight the importance of inquiring about experienced experimental\n                     conditions in user research.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735788\">Rubikon: Intelligent Tutoring for Rubik&#8217;s Cube Learning Through AR-enabled Physical\n                  Task Reconfiguration<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Haocheng Ren<\/li>\n               <li class=\"nameList\">Muzhe Wu<\/li>\n               <li class=\"nameList\">Gregory Thomas Croisdale<\/li>\n               <li class=\"nameList\">Anhong Guo<\/li>\n               <li class=\"nameList Last\">Xu Wang<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Learning to solve a Rubik\u2019s Cube requires the learners to repeatedly practice a skill\n                     component, e.g., identifying a misplaced square and putting it 
back. However, for\n                     3D physical tasks such as this, generating sufficient repeated practice opportunities\n                     for learners can be challenging, in part because it is difficult for novices to reconfigure\n                     the physical object to specific states. We propose Rubikon, an intelligent tutoring\n                     system for learning to solve the Rubik\u2019s Cube. Rubikon reduces the necessity for repeated\n                     manual configurations of the Rubik\u2019s Cube without compromising the tactile experience\n                     of handling a physical cube. The foundational design of Rubikon is an AR setup, where\n                     learners manipulate a physical cube while seeing an AR-rendered cube on a display.\n                     Rubikon automatically generates configurations of the Rubik\u2019s Cube to target learners\u2019\n                     weaknesses and help them exercise diverse knowledge components. In a between-subjects\n                     experiment, we showed that Rubikon learners scored 25% higher on a post-test compared\n                     to baselines.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735773\">Partnership through Play: Investigating How Long-Distance Couples Use Digital Games\n                  to Facilitate Intimacy<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Nisha Devasia<\/li>\n               <li class=\"nameList\">Adrian Rodriguez<\/li>\n               <li class=\"nameList\">Logan Tuttle<\/li>\n               <li class=\"nameList Last\">Julie A. 
Kientz<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Long-distance relationships (LDRs) have become more common in the last few decades,\n                     primarily among young adults pursuing educational or employment opportunities. A common\n                     way for couples in LDRs to spend time together is by playing multiplayer video games,\n                     which are often a shared hobby and therefore a preferred joint activity. However,\n                     games are relatively understudied in the context of relational maintenance for LDRs.\n                     In this work, we used a mixed-methods approach to collect data on the experiences\n                     of 13 couples in LDRs who frequently play games together. We investigated different\n                     values around various game mechanics and modalities and found significant differences\n                     in couple play styles, and also detail how couples appropriate game mechanics to express\n                     affection to each other virtually. We also created prototypes and design implications\n                     based on couples\u2019 needs surrounding the lack of physical sensation and memorabilia\n                     storage in most popular games.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735791\">Exploring the Role of Interactive Technology to Enrich Surfing<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Maria F. Montoya<\/li>\n               <li class=\"nameList\">Aryan Saini<\/li>\n               <li class=\"nameList\">Sarah Jane Pell<\/li>\n               <li class=\"nameList\">Phoebe O. 
Toups Dugas<\/li>\n               <li class=\"nameList Last\">Florian \u2018Floyd\u2019 Mueller<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Surfing is not just a sport; it is a playful water activity rich in culture. Prior\n                     interaction design work to support surfers has mostly focused on improving performance;\n                     yet emphasizing performance misses the experiential value of play and enjoyment, which\n                     is under-investigated. To explore this opportunity, we engaged in a soma design process\n                     resulting in two prototypes, an actuating wearable top and an octopus-inspired soft\n                     robot, aiming to facilitate a playful experience to enrich surfing. We conducted an\n                     exploratory study with eight surfers in a swimming pool (acknowledging safety but\n                     also limitations). Through thematic analysis of interviews, we found three themes\n                     that supported the idea that the design features of the prototypes have the potential\n                     to enrich surfing. 
Moreover, adopting a postphenomenological lens, we investigate\n                     the Human-Technology-Water relations to understand the role of interactive technology\n                     during surfing and propose five design strategies that researchers can consider to\n                     develop future designs for the experiential aspects of surfing.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            <h2><a id =\"Work_and_Productivity\">SESSION: Work and Productivity<\/a><\/h2>\n            \n            \n            \n            \n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735799\">The Office Awakens: Building a Mobile Desk for an Adaptive Workspace with RolliDesk<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Julia Dominiak<\/li>\n               <li class=\"nameList\">Anna Walczak<\/li>\n               <li class=\"nameList\">Adam Jan Sa\u0142ata<\/li>\n               <li class=\"nameList\">Andrzej Romanowski<\/li>\n               <li class=\"nameList Last\">Pawe\u0142 W. Wo\u017aniak<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Over the past century, office desks have evolved with technological advancements,\n                     yet they have largely overlooked individual user preferences and diverse body types.\n                     Traditionally, desks remain static objects, forcing users to adapt their workspaces\n                     around them. This research explores how mobile desks can offer a more flexible and\n                     adaptive solution. We developed RolliDesk, a mobile desk capable of automatically\n                     moving within the workspace. 
Our open-source desk kit enables researchers to make\n                     desks mobile using off-the-shelf electronics and 3D printing. In a mixed-methods study\n                     (<em>n<\/em> = 21), we compared three desk configurations: manually controlled via a crank, control\n                     panel-operated, and automatically adaptive. Participants found the manual desk creepy,\n                     while the automatic desk was considered the most useful, particularly for promoting\n                     healthier office habits. This paper contributes RolliDesk\u2019s design and practical insights\n                     for advancing reconfigurable and adaptive workstations.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735792\">&#8216;Stick to&#8217; Three: Fostering Awareness, Intentions, and Reflections on the Top Daily\n                  Tasks<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Andre N Meyer<\/li>\n               <li class=\"nameList\">Nimra Ahmed<\/li>\n               <li class=\"nameList\">Isabelle Cuber<\/li>\n               <li class=\"nameList\">Sebastian Richner<\/li>\n               <li class=\"nameList\">Elaine M. 
Huang<\/li>\n               <li class=\"nameList\">Gail Murphy<\/li>\n               <li class=\"nameList Last\">Thomas Fritz<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>Knowledge workers face increasing challenges in managing numerous digital tasks, often\n                     leading to long task lists that distract from completing the most important ones.\n                     We present <em>AIRbar<\/em>, a task management tool designed to enhance <em>A<\/em>wareness, <em>I<\/em>ntention, and <em>R<\/em>etrospection (<em>AIR<\/em>) in daily task management. <em>AIRbar<\/em> prompts workers to prioritize a maximum of three daily tasks, displays them in an\n                     always-on glanceable widget, and facilitates end-of-day reflection to improve task\n                     completion and self-awareness. In a 4-week field study with 35 participants, we found\n                     that <em>AIRbar<\/em> increased task completion rates, improved focus and motivation, and positively influenced\n                     perceptions of work processes. 
These findings suggest that limiting the number of\n                     tasks and ensuring continuous visibility of priorities can address key challenges\n                     in modern task management, providing actionable insights for designing future task\n                     management systems.<\/p>\n               <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735767\">Pati &amp; Bellio: Coordinating Face-to-Face Interruptions via Availability Expressions\n                  and Proximal Notifications in Open-Plan Offices<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Nari Kim<\/li>\n               <li class=\"nameList\">Nanum Kim<\/li>\n               <li class=\"nameList\">Jin-young Moon<\/li>\n               <li class=\"nameList\">Jaha Lim<\/li>\n               <li class=\"nameList Last\">Young-Woo Park<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \n                  <p>In open-plan offices, face-to-face (F2F) interruptions frequently occur to facilitate\n                     collaboration and cooperation with colleagues. We designed Pati &amp; Bellio to support\n                     the coordination of F2F interruptions in open-plan offices. Pati is a partition-style\n                     personal device that visualizes availability for F2F interruptions and Bellio is a\n                     shared interface that allows users to send notifications to their colleagues from\n                     a distance. 
Our three-week in-field study with four groups of participants reveals\n                     that examining the process of F2F interruptions helps determine the importance of\n                     the interruption, and the physical distance provided by Pati and Bellio naturally\n                     allowed time to prepare for conversations. We also identified how the visualized availability\n                     is considered after an interruption begins. Our findings suggest considerations for designing\n                     systems to support coordinating social interaction in work environments.<\/p>\n                  <\/div>\n            <\/div>\n            \n            \n            \n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3715336.3735679\">Gemini at Work: Knowledge Workers&#8217; Perceptions and Assessment of Productivity Gains<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Na Sun<\/li>\n               <li class=\"nameList Last\">Donald Kalar<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  <p>The rise of Generative AI (GenAI) presents a paradigm shift in knowledge work. To\n                     examine its impact on productivity, we conducted seven focus groups (n=37) with employees\n                     across diverse job functions in an enterprise setting, where workers engaged with\n                     a large language model (LLM) embedded in a suite of productivity applications. 
Our\n                     study identified four categories of GenAI-facilitated work activities: information\n                     management, content generation, problem solving, and communication and collaboration.\n                     These findings offer a grounded framework of GenAI-enabled practices for both researchers\n                     and practitioners, while also surfacing key challenges in realizing GenAI\u2019s promised\n                     productivity gains. Beyond conventional metrics like time savings or output volume,\n                     participants attributed productivity improvements to time redistribution, enhanced\n                     decision-making, and reduced reliance on traditional intermediaries. We contribute\n                     actionable insights for designing GenAI systems that support context-aware productivity\n                     in the evolving landscape of knowledge work.<\/p>\n               <\/div>\n            <\/div>\n            \n            <\/div>\n      <\/div>\n","protected":false},"excerpt":{"rendered":"<p>Paper Sessions DIS &#8217;25: Proceedings of the 2025 ACM Designing Interactive Systems Conference Full Citation in the ACM Digital Library SESSION: Extended Reality Sensing Nature Tianyuan Zhang Wei Lin Dingye Zhang Xueni Pan William Latham Katie Grayson Zillah Watson Marco Fyfe Pietro Gillies The rise of urbanisation has reduced connection with nature and physical interaction, 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-505","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/dis.acm.org\/2025\/wp-json\/wp\/v2\/pages\/505","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dis.acm.org\/2025\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/dis.acm.org\/2025\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/dis.acm.org\/2025\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dis.acm.org\/2025\/wp-json\/wp\/v2\/comments?post=505"}],"version-history":[{"count":18,"href":"https:\/\/dis.acm.org\/2025\/wp-json\/wp\/v2\/pages\/505\/revisions"}],"predecessor-version":[{"id":527,"href":"https:\/\/dis.acm.org\/2025\/wp-json\/wp\/v2\/pages\/505\/revisions\/527"}],"wp:attachment":[{"href":"https:\/\/dis.acm.org\/2025\/wp-json\/wp\/v2\/media?parent=505"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}