Exploring Mobile Devices as Haptic Interfaces for Mixed Reality

Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
Research article | Open access | Published: 11 May 2024
DOI: https://doi.org/10.1145/3613904.3642176

Abstract

Dedicated handheld controllers facilitate haptic experiences of virtual objects in mixed reality (MR). However, as mobile MR becomes more prevalent, we observe the emergence of controller-free MR interactions. To retain immersive haptic experiences, we explore the use of mobile devices as a substitute for specialised MR controllers. In an exploratory gesture elicitation study (n = 18), we examined users’ (1) intuitive hand gestures performed with prospective mobile devices and (2) preferences for real-time haptic feedback when exploring haptic object properties. Our results reveal three haptic exploration modes for the mobile device (as an object, a hand substitute, or an additional tool) and emphasise the benefits of incorporating the device’s unique physical features into the object interaction. This work expands the design possibilities of using mobile devices for tangible object interaction, guiding the future design of mobile devices for haptic MR experiences.
Figure 1: A selection of haptic hand movements from our user-derived gesture set involving a mobile device. The gesture design illustrates the distinct positions a mobile device can adopt: (1) object mode, assuming a physical representation of the object; (2) hand mode, representing the user’s hand; and (3) tool mode, maintaining its original manifestation for indirect interaction.

1 Introduction

To date, immersive mixed reality (MR) systems have been primarily confined to homes or other dedicated spaces, typically offering users the choice between controllers or hand tracking for their individual needs and experiences. Recent changes in the MR domain have made mobile settings increasingly prevalent, facilitating, e.g., remote collaboration in distant locations [14], on-site urban city planning [9], or mobile offices on trains [31] and in future self-driving cars [41]. This growing emphasis on MR mobility highlights the impracticality of carrying separate controllers alongside head-mounted displays (HMDs) and motivates controller-free MR interactions. However, while an HMD’s integrated hand-tracking feature reduces the hardware burden and allows high-fidelity user input, the absence of a tangible input device limits haptic exploration and immersion. This introduces the challenge of maintaining immersive haptic experiences within controller-free MR.
Previous research has explored different approaches to substituting dedicated handheld controllers with more ubiquitous devices for MR interactions by integrating, e.g., everyday objects [8], smartwatches [47], and touch-based gestures [44, 49] or 3D motion gestures with a mobile device [12, 20, 30]. Given their widespread accessibility and advanced sensing capabilities via touch screens and inertial measurement units (IMUs), mobile devices in particular hold promising potential for facilitating haptic MR interactions. Further, incorporating an HMD’s hand-tracking feature can expand the sensing capabilities without adding hardware to the mobile device, enabling the detection of hand and finger movements in contact with the device’s currently non-sensing parts (e.g., the overall shape, back and sides, and the immediate space around the device). Utilising mobile devices with such extended sensing capabilities as an input device expands the traditional design space of touchscreen gestures towards novel device gestures for touching and haptically experiencing virtual objects in MR. Moreover, using mobile devices to provide haptic feedback for MR facilitates the accessibility and mobility of future haptic technology. It is, therefore, important to understand how people haptically explore objects in the physical world and to incorporate relevant aspects into the gesture design.
Therefore, to reintroduce haptic experiences into controller-free MR systems, we explore the utilisation of mobile devices as a replacement for specialised MR controllers. Our research methodology comprises two user studies. An initial online survey (n = 50) empirically defines a candidate object set that manifests six common haptic object properties, namely size, shape, texture, hardness, temperature, and weight [24, 25, 26], which we use in a follow-up step when investigating haptic exploration. There, we conducted an in-depth gesture elicitation study (n = 18) to investigate (1) user-defined hand gestures performed with prospective mobile devices and to assess (2) users’ preferences for real-time haptic feedback for exploring haptic object properties in MR.
By synthesising our findings into a design space, we outline ten main gesture types, including grip + move, pinch, stroke, press, grip, two-handed grip, unsupported holding, tap, hover, and wearing gesture for virtual object exploration. Furthermore, we provide a variety of related haptic sensations voiced by our participants, including, e.g., force feedback, surface deformation, weight shift, and shape change, among others. Moreover, variations of gestures within our design space illustrate three distinct modes that the device can adopt: (1) object mode, (2) hand mode, and (3) tool mode (see Figure 1). We discuss our results and their implications for using the mobile device as an input tool for haptically exploring virtual objects in MR. In summary, this paper makes the following contributions:
(1) It contributes a set of user-defined exploration gestures performed with a mobile device and with hands only in MR that extract specific haptic properties of virtual objects.
(2) It offers a comprehensive design space that links haptic attributes of virtual objects with user-defined gestures and with the desired real-time haptic feedback.
(3) Finally, it provides an in-depth understanding of how users adapt natural exploratory procedures to MR object interaction using mobile-device input gestures.
This work broadens the scope of haptic interactions possible in MR with prospective mobile devices. By doing so, we hope that our design space fosters a knowledge-driven approach to integrating user-defined mobile device gestures that seamlessly align with user-defined haptic feedback. Moreover, our findings can guide the design of mobile devices, enrich their input and haptic output capabilities and shape future haptic experiences in mobile MR.

2 Related Work

We review previous research on how users typically interact in MR environments, as well as existing research that has investigated the use of mobile devices for providing haptic feedback to users.

2.1 Gesture-Based MR Interaction for Mobile Devices

While the visual and auditory capabilities of MR systems continue to advance [21, 61], the primary mode of interacting with virtual objects still relies on handheld controllers with traditional button types and vibration as limited haptic feedback. Research has proposed various novel systems with diverse form factors that extend the quality and range of possible inputs and feedback, such as handheld and wearable props, encounter-type interfaces, and the use of ultrasound for providing haptic rendering to mid-air gestures (see [7, 35, 39, 53] for detailed reviews). However, with the prevalence of standalone MR headsets shifting the interaction with virtual and augmented objects towards more mobile settings, these devices impose several limitations, including their bulkiness, their grounding requirements, or simply being cumbersome to carry around. Consequently, research began to explore and advance the capabilities of ubiquitous devices we already carry with us for input sensing and feedback in MR, including smartphones, tablets, or smartwatches. Interactions often go beyond the known touch-based gestures and employ dexterous device gestures. For example, prior work revealed the capacity of smartphones to sense various squeezing gestures [57] and hand postures [6, 15], 3D manipulations of the phone [56], as well as the orientation [52] and pressure [4] of individual fingers, illustrating the capabilities of existing mobile devices for sensing and simulating haptic experiences in MR.
The potential of ubiquitous, everyday devices such as smartphones to serve as substitutes for tracked controllers has also been investigated in both augmented and virtual reality settings. Previous work employed a wristband-like form factor and attached a mobile phone to users’ arms to provide a visual display and enable spatial tracking of the hand [37]. Samimi and Teather [44] improved menu selection in virtual reality (VR) using a smartphone’s multi-finger touch feature. Work by Surale et al. [49] explored the design space regarding the use of a tablet’s screen for 3D object manipulation in VR. Dong et al. [12] investigated the use of surface and 3D motion gestures with smartphones for input in AR, and Kari and Holz [20] recently showed the capacity of smartphones for effective bimanual input for VR in mobile or space-constrained settings. However, transitioning the interaction from a 3D virtual or augmented environment to touchscreen devices with distinct characteristics necessitates adaptation. The question of how users would adjust their active object exploration to these devices, and how they would like the devices to react and feel, remains open.

2.2 Haptics for Mobile Devices

Haptic rendering capabilities of commercial mobile devices are limited to vibration, which is traditionally employed for notifications [38, 42] or the haptic augmentation of mobile game effects [50]. To enhance the interaction through more elaborate vibrations, a variety of easy-to-understand, meaningful vibration patterns were designed [55], and more usable and secure user authentication on an Apple iPod Touch was investigated [2]. Previous work has also explored a variety of haptic mechanisms to equip these small and thin devices with more diverse haptic sensations, such as augmenting the device’s flat surfaces and edges. For example, Jang et al. [19] augmented a mobile device’s edge with an actuated pin array that can change in shape, size, and location to provide additional physical affordances during the interaction. A simpler approach was used by Maiero et al. [29], who integrated one movable pin into a custom 3D-printed case to explore force feedback at the back of a smartphone and augment the touch interactions on the front screen. In another approach to increasing the interactivity of the mobile phone’s edge, Luk et al. [28] incorporated a button that was operated with the thumb and provided lateral skin stretch for interactions such as scrolling. More recent work by Shultz and Harrison [45] pushed the dimensions of interactive surfaces and proposed a novel miniature shape-changing display that augments flat surfaces with rising and retracting bumps. Their flat panel haptic display is capable of simulating pop-up interfaces, providing pixelated displays, or being used beneath thin visual displays, such as smartphones, to create deformable flat, pop-out, or sucked-in keyboards.
The alteration of the overall device shape was also thoroughly examined using various technologies. A deformation of the device’s corners was proposed by Rasmussen et al. [40]. Their flexible and shape-changing artefact "ReFlex" allows users to directly control the shape of a smartphone-like object. Their handheld prototype is equipped with servomotors at each corner, enabling it to lift both of its corners. While Dimitriadis and Alexander [11] also bent the corners of their mobile phone prototype, they employed a rigid form factor that comprised movable parts. Moving the top cover upwards, e.g., allows the device to expand its volume. The usage of servo motors and pulse sensations in other work allowed the adaptation of the device’s physical appearance to simulate permanent tactile life-like actuation, e.g., the breathing of the phone [17]. Later work by the same author attached a mobile phone to a box containing a wobbling plate to provide haptic sensations of tapering for, e.g., notifications [18]. Building upon the same concept of life-like devices, various materials were explored to animate the overall shape of a handheld mobile device to enable it to curl or crawl, mimicking evocative life-like gestures [10].
While prior research has introduced a range of haptic interfaces in mobile devices, their capacity for haptic feedback is often limited to a single sensation, e.g., perceiving changes in the overall shape or deformations of the surface. To deliver a more comprehensive haptic experience of virtual objects in MR, it is crucial to generate diverse stimuli, encompassing factors like the object’s temperature, weight, or hardness. Such multi-haptic interfaces provide multiple simultaneous haptic sensations in a single system. This approach allows for a more diverse range of haptic cues and has been employed in previous haptics research for multi-sensory experiences, e.g. [1, 27]. Understanding the distinct haptic feedback users associate with an object’s haptic attributes is, therefore, part of this work’s goal.

2.3 Summary

In summary, previous research has proposed various interaction methods with virtual objects for mixed reality. While our review highlights research efforts to expand input and feedback mechanisms, including handheld devices, wearable devices, and ultrasound for mid-air haptic rendering, MR controllers with vibration feedback are still the primary mode of interaction in MR. The move towards mobile, standalone MR experiences in public settings has prompted research into utilising everyday devices such as smartphones or wristbands for input sensing and haptic rendering. However, to date, there is limited research on how users can, and more importantly desire to, explore virtual objects and their properties in MR. As reviewed in Section 2.2, there is a plethora of research that investigated novel prototypes for haptic interactions and how existing form factors (e.g., those of smartphones) can be enhanced. However, what is missing is a comprehensive exploration of the interaction and design space of mobile devices when used for gestural object exploration in MR. Additionally, understanding how users’ needs and preferences influence the future design of mobile devices for haptic exploration, especially when these devices are used within MR [16, 20], remains an important element to address.
Table 1: The final 12 objects from Klatzky et al. [22]’s initial set of 100 objects that we pre-selected based on participants’ ratings of the objects’ properties to cover as many manifestations as possible, i.e., both ends of size (small, large), shape complexity (low, high), texture (smooth, rough), hardness (low, high), temperature (low, high), and weight (light, heavy). The mean and SD are expressed as normalised values between 0 and 1.

Size: small = paperclip, 0.03 (0.05); large = umbrella, 0.61 (0.11)
Shape complexity: easy = toothpick, 0.05 (0.06); complex = glove, 0.75 (0.18)
Texture: smooth = rice, 0.11 (0.05); rough = sandpaper, 0.81 (0.33)
Hardness: soft = toilet paper, 0.08 (0.01); hard = clamp, 0.90 (0.05)
Temperature: cold = key, 0.16 (0.10); warm = candle, 0.77 (0.14)
Weight: light = balloon, 0.03 (0.07); heavy = hammer, 0.66 (0.18)

3 Study 1: Online Survey

Our online survey aims to understand people’s expectations about a 3D object’s haptic properties based on visual information. Given the subjectivity of haptic expectations, which are mainly derived from an object’s graphical appearance to compensate for the absence of dexterous exploration, we employed an empirical approach to balance subjective perspectives and to capture a more objective and generalisable haptic understanding. Participants were asked to rate 100 virtual objects according to their expectations of the six haptic properties that we derived from previous work [22]. In a series of haptic exploration experiments, Lederman and Klatzky studied what haptic knowledge people extract through touch [24, 25, 26] and identified six haptic attributes that comprehensively capture an object’s tactility: size, shape, texture, stiffness, temperature, and weight. Together, the 100 presented virtual objects cover a range of these six haptic properties, contributing towards the generalisability of our results.
To present participants with visual representations of the 100 objects, we first collected 3D objects from various 3D modelling platforms, including cgtrader.com, sketchfab.com, and free3d.com. We searched for publicly available 3D models with a visual fidelity that is neither too abstract (e.g., a pixelated red sphere representing an apple) nor too photorealistic (e.g., a highly realistic copy of an apple) to ensure we could re-use the 3D models later in our MR-based gesture elicitation study. We also aimed for a consistent visual fidelity across all 100 candidate objects, contributing towards internal validity. We then captured the original preview images as 2D copies of the 3D objects for the online survey and stored the original 3D objects for later use in our gesture elicitation study.

3.1 Study Design and Participants

After preparing the study material, we deployed a within-subjects online survey to understand people’s perceptions of the six haptic properties of each candidate object. These properties include size, shape, texture, hardness, temperature, and weight, which cover the six most important characteristics when identifying objects [22]. Each property manifests two opposing characteristics. For example, for the property size, participants could rate each object on a continuous 0-to-100 scale from small (0) to large (100) using a slider. We then calculated means to identify, for each property, one object at each end of the continuum. For example, if one object received the lowest mean score across all objects for size, we considered this object a suitable candidate for representing a “small object” in our follow-up gesture elicitation study. The same was done for the highest mean score, representing a suitable candidate for, e.g., a “large object”.
Fifty participants (n = 50; aged 20 to 59, M = 28.65, SD = 8.41) were recruited using Prolific, an online crowd-sourcing platform that has found widespread application in empirical HCI research, e.g. [33, 36]. The study was distributed evenly to male and female participants using Prolific’s internal distribution feature. We added three attention-check questions to ensure data quality and excluded survey takers who clearly did not put significant effort into filling out the survey. To further contribute towards high data quality, we added a 5-point Likert scale asking how genuinely participants had filled in the survey; this was added after four initial pilot tests, and we emphasised that the answer to this question had no impact on their reward. Overall, the online survey was scheduled for 30 minutes and took a median of 34.35 minutes to complete. None of the survey takers had to be excluded from the data analysis based on the attention checks, and participants’ self-reported commitment was M = 4.89 (SD = 0.31). Participants received an average reward of £6.01/hr via Prolific’s reimbursement system.

3.2 Procedure and Ethics

After an initial screening by the ethics board of the University of St. Gallen, our research was exempt from a formal full review. All participants signed up for the study directly on Prolific and were then forwarded to our survey, which was hosted on SoSci Survey (https://www.soscisurvey.de/), a professional platform for designing and hosting surveys. Participants provided informed consent and went through an example object that introduced them to their task (i.e., “In this survey, you will see images of objects and rate their properties. A short example at the beginning will illustrate the task.”). After going through the example, participants were asked to indicate their expectations of the size, shape, texture, temperature, weight, and stiffness of 100 objects using sliders from 0 to 100, as described in Section 3.1.

3.3 Results: Pre-Selection of Object Candidates

To identify objects that cover both ends of each category continuum (e.g., small size and large size), we normalised the dataset and calculated means and standard deviations (SDs) of participants’ indicated expectations of all six haptic properties of each object. This process identified the 12 objects that cover all properties (i.e., size, shape, texture, stiffness, temperature, weight) and all characteristics (e.g., small size or large size). For example, for size, the paper clip object was perceived as the smallest and the umbrella object as the largest out of all 100 objects from Klatzky et al. [22]. Table 1 shows the identified objects at both ends of the individual continua, including the means and SDs from participants’ ratings. Overall, participants rated their expected quality of a specific haptic attribute consistently in the online survey. In particular, while the visually displayed object sizes in the 2D copies are not in realistic proportion, all size ratings across objects and participants show low variability among the individual SDs, with an average of 0.09 (SD = 0.06). This indicates that participants did not rate an object’s size as visually displayed in the 2D copy, but rather the expected actual object size, as intended by the study task.
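To make the pre-selection step concrete, the following minimal sketch (not the analysis code used in the study) illustrates how the normalisation and the per-property min/max selection described above could be computed with pandas; the file name survey_ratings.csv and its column names are assumptions for illustration.

    import pandas as pd

    # Hypothetical layout: one row per participant x object x property rating on the 0-100 slider.
    ratings = pd.read_csv("survey_ratings.csv")  # columns: participant, object, property, rating

    # Normalise slider values to the 0-1 range reported in Table 1.
    ratings["rating"] = ratings["rating"] / 100.0

    # Mean and SD of the expected property value per property and object.
    stats = (
        ratings.groupby(["property", "object"])["rating"]
        .agg(["mean", "std"])
        .reset_index()
    )

    # For every property, pick the objects with the lowest and highest mean rating,
    # e.g. paperclip (size = 0.03) and umbrella (size = 0.61).
    for prop, group in stats.groupby("property"):
        low = group.loc[group["mean"].idxmin()]
        high = group.loc[group["mean"].idxmax()]
        print(f"{prop}: min = {low['object']} ({low['mean']:.2f}), "
              f"max = {high['object']} ({high['mean']:.2f})")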
As a next step, we integrated the 12 objects into our haptic exploration study to investigate how people would elicit haptic sensations when interacting with them in MR. While our objects cover various object categories and properties (e.g., low temperature and high temperature) and are informed by previous user-centred object recognition research [22], we want to emphasise that our candidates do not represent an exhaustive list of possible MR objects; rather, we aimed at identifying objects that cover a broad spectrum of characteristics in an initial step, to then investigate possible related gestures for exploration, which we study in more depth in our gesture elicitation study.

4 Study 2: Eliciting Gestures and Haptic Feedback

Our gesture elicitation study aimed to understand MR users’ haptic interactions using a mobile device for exploring virtual objects through their haptic senses. In particular, we explored hand movements, gestures and the type of haptic feedback they envision when instructed to inspect a specific haptic attribute.

4.1 Study Design & Task

We employed an exploratory within-subjects design with a thinking-aloud protocol, similar to previous gesture elicitation studies [5, 12, 54]. This approach allowed us to capture participants’ intuitive hand movements when presented with a haptic exploration task involving a mobile device. In addition, participants verbally expressed ideas and explanations regarding their envisioned haptic sensations, providing a deeper understanding of meaningful haptic renderings for future mobile MR.
During the study task, participants wore an MR HMD and observed a virtual object placed within the AR passthrough (see Figure 2 (a)). We asked participants to invent and perform one or multiple gestures involving a super-sensing artefact to explore a specific haptic attribute. The super-sensing artefact served as a physical representation of a future mobile device for MR and was incorporated into participants’ interactions and imaginations for haptic feedback. Based on the selection from the online survey, 12 objects in total were presented; the two objects representing the min and max variant of a specific haptic attribute were always shown consecutively, in counterbalanced order. The process of gesture invention was guided by predetermined and ad-hoc questions, ensuring a deeper understanding of participants’ verbally expressed design choices, and consisted of two steps:
Step 1: Natural object exploration. We asked participants to visually inspect a particular haptic attribute, e.g., the texture, of the virtual object in front of them and to describe their expected haptic sensation upon touch: By just looking at the object, what would you expect the [haptic attribute] to feel like? We then instructed them to use their hands and explore the haptic attribute to confirm their previously described expectation: What would you do with your hand to confirm your just mentioned expectation?, Can you explain what you are doing and why?, and What would you expect to feel with your hand while performing the interaction? The purpose of this first step was to support participants in establishing an understanding of the involved haptic sensations and signals derived through haptic touch exploration. Additionally, the performed hand movements served as a starting point for the subsequent gesture adaptation.
Step 2: Object exploration through tangible artefact interaction. Participants were asked to design an interaction with the super-sensing artefact that allowed them to experience the haptic sensation they explored in step 1. We guided participants to voice their thoughts by asking the same questions as in step 1 while adding a question specific to the artefact interaction: Can you explain how and why you adapted your hand interaction for the device?

4.2 Procedure and Ethics

After an initial screening by the ethics board of the University of St. Gallen, our research was exempt from a formal full review. Participants took part in the study voluntarily and did not receive monetary compensation. The study was conducted at two semi-public co-working spaces (see Figure 2 (b) for one of the spaces). At the beginning of the study, participants took a seat at a desk and were informed about the study’s purpose. Participants provided their consent and completed a demographic questionnaire. The study task was then explained to them, and the mobile super-sensing artefact was introduced. The device was characterised as having limitless capabilities in sensing and providing haptic feedback, being capable of processing any imaginative request from participants. To ensure a shared understanding across all participants, we provided the following exemplary capabilities: the ability to detect touch, touch location, pressure, orientation, spatial position, and key presses on physical buttons. After addressing any remaining questions, participants were equipped with the Meta Quest Pro HMD with passthrough enabled. The super-sensing artefact was positioned on the table, easily reachable for participants for step 2 of the gesture invention process. The study conductor then guided participants through the exploration of all 12 virtual objects, asking both predetermined questions and additional inquiries in response to participants’ descriptions. Upon completing all objects, the study concluded with a short semi-structured interview. The study lasted around 1 hour on average.
Figure 2: (a) Participants’ view inside the MR HMD displaying the virtual components (object and tracked hands) in front of the AR passthrough layer (background, artefact and participants’ arms). (top row) Exploring the haptic features of the virtual toothpick or balloon with hands. (bottom row) Exploring the haptic features of the virtual toothpick or balloon with the tangible artefact. (b) Setup of our gesture elicitation study in a semi-public co-working space.

4.3 Apparatus & Data Collection

The Meta Quest Pro and its AR feature were used for displaying the virtual objects in front of the AR passthrough (see Figure 2 (a)). The HMD’s hand-tracking feature additionally displayed users’ hands. The study application was implemented in Unity 2022.3.4f1 and the HMD was connected via cable to the computer during the study. A Google Pixel 7, turned off, was used as the super-sensing artefact.
To capture participants’ hand movements, we employed a two-camera setup. The first camera was placed on a tripod and recorded from an overhead, right-side perspective. The second camera was placed on a small tripod on the table and focused on tracking finger movements on the back of the artefact. However, videos from the second camera were omitted from the data analysis, given that the recordings from the first camera, in conjunction with participants’ comments, yielded ample information for our purposes. Alongside the video recordings, a wireless RØDE microphone captured participants’ comments. Since the camera’s audio recording was of sufficient quality, we did not use the microphone’s data for the analysis. The consent form and demographic questionnaire were hosted on SoSci Survey (https://www.soscisurvey.de/), which participants filled out before the study, either on their private device or on a laptop provided by the study conductor.

4.4 Participants

Due to technical issues, one data set was excluded from the analysis reported in Section 5. Therefore, we only report the demographics of our final sample of 17 participants. All participants were recruited via word of mouth (6 female, 11 male) and were aged between 23 and 32 (M = 27.47, SD = 2.48). All participants were right-handed (n = 17), and seven had corrected vision using glasses or contact lenses. When asked about prior VR experience, most participants had engaged with VR once or twice in total (n = 6) or several times per year (n = 7). One participant had never used VR before, one used VR several times per month, one several times per week, and one daily. The AR experience was similar, with seven participants each having engaged in AR experiences once or twice in total or several times per year. The remaining three participants use AR several times per month (n = 2) or had never used it before (n = 1).

5 Results: Eliciting Gestures and Haptic Feedback

The video recordings were transcribed and analysed through an iterative process of open coding and thematic analysis [3], using the MaxQDA data analysis software. Three researchers performed an initial round of open coding on two data sets, covering 11.76% of the overall data (2 out of 17). With an additional researcher, four researchers in total completed the remaining data sets for the construction of the design space.
Table 2: In our elicitation study, participants proposed ten types of gestures and eleven types of haptic feedback for exploring the six haptic attributes, each with its min and max variant. The types of gestures and types of haptic feedback are both sorted by frequency.
Table 3: Our design space for interacting with a mobile device as a haptic interface for object exploration in MR. The table depicts participants’ invented gesture types alongside the corresponding envisioned haptic feedback for each object attribute, distinguishing each attribute’s min and max variant.
Gestures. We identified over 200 suggested individual gestures across the data sets (17 participants × at least one gesture for each of the twelve objects), from which we derived ten high-level categories of gestures (see Table 2, top). In total, these gesture categories were employed to explore specific haptic attributes in 31 distinct gesture-attribute combinations. While Grip + Move was employed for the highest number of min and max attribute variants, Stroke was the most versatile, as it was applied to explore all six haptic attributes. Tap, Hover, and Wearing Gesture are situated at the lower end of the frequency spectrum, each associated with only a single haptic attribute. The idea of the wearing gesture was to push the fingers through the device to achieve isolated finger placement, similar to wearing a glove. A detailed overview of the individual gestures in relation to the objects is illustrated in Figure 3, demonstrating the broad and heterogeneous design space of gesture interaction with a mobile phone as a haptic interface. Moreover, we identified the mobile phone’s relative positioning to the virtual object as a varying factor among gesture categories. One variation shows the phone portraying the object itself while participants manually explored areas of interest on the phone (see, e.g., Pinch the paper clip for the size in Figure 3). In another variation, participants held the phone inside their palm and moved it alongside their hand around the object (see, e.g., Press the balloon down for the weight in Figure 3). Lastly, in some instances, the object was explored by tapping it with the roughly grasped phone (see, e.g., Grip + Move of the paperclip or umbrella for the size in Figure 3). Based on these observed positional relationships between the phone, the hand, and the object, we derived three modes that the phone can assume as a crucial aspect of the gesture design: (1) object mode, (2) hand mode, and (3) tool mode (see Figure 1).
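To illustrate how such combination counts can be derived from the coded video data, the following small sketch cross-tabulates gesture categories and haptic attributes and counts the non-empty cells; the example rows are invented for illustration and are not the study data.

    import pandas as pd

    # Invented example rows standing in for the coded gesture data
    # (one row per elicited gesture with its assigned category and explored attribute).
    codes = pd.DataFrame({
        "participant": ["P01", "P01", "P02", "P03"],
        "gesture":     ["Grip + Move", "Stroke", "Pinch", "Stroke"],
        "attribute":   ["Size", "Texture", "Size", "Hardness"],
        "variant":     ["max", "max", "min", "min"],
    })

    # Contingency table of gesture categories x haptic attributes; the number of
    # non-empty cells corresponds to the reported gesture-attribute combinations.
    table = pd.crosstab(codes["gesture"], codes["attribute"])
    print(table)
    print("gesture x attribute combinations:", int((table > 0).sum().sum()))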
Overall, during the study, participants did not comment on the two rendering layers (virtual, physical) inside MR or their virtual hands and objects occluding the tangible artefact. Participants accurately described most haptic features of the objects in the study, with only two deviations noted. The umbrella was perceived as smaller than real-life, sometimes likened to a children’s umbrella, while the paper clip was seen as larger than its real-life counterpart. However, as these size variations remained within a narrow margin and still depicted realistic versions of the objects, we believe the gesture elicitation was unaffected.
Haptic Feedback. For real-time feedback to haptically experience the object properties during the gesture interaction, participants envisioned eleven distinct types of haptic feedback (see Table 2, bottom). These haptic outputs were suggested in a total of 24 combinations with the object properties. While no feedback mechanism was consistently applied across all six attributes, force feedback/resistance was envisioned to be the most versatile, being associated with five attributes. This was followed by passive haptics through device features and surface deformation, both of which were linked to four attributes. The remaining concepts were specific to only one or two haptic attributes.
Gestures × Haptic Feedback. An overview of the relationship between both interaction parameters, the gesture types and the corresponding haptic feedback, is presented in Table 3. Here, Grip + Move was by far combined with the most types of haptic feedback and max and min variants, reaching a total of 17 combinations. The second-highest level of versatility is demonstrated by Stroke, which encompasses nine combinations. The remaining combinations of gesture category, haptic feedback, and attribute variant were at or below five. As an additional visualisation, Figure 4 in the appendix depicts the data flow within this complex design space, constrained by the parameters haptic attribute, attribute variant, gesture category, and haptic feedback. In the following, we discuss the interactions with the individual haptic attributes in more detail.
Figure 3: The user-defined gesture set as part of our design space derived from our elicitation study to explore the six haptic attributes. The individual gestures within the gesture categories vary across the haptic attributes and are complemented by the illustrated corresponding object as well as arrows to visualise hand and artefact movement. If gestures were utilised to explore both min and max variants, one representative object was selected for illustration.

5.1 Size

Participants’ design of gestures for exploring an object’s size was strongly impacted by the object’s large or small dimensions. Often, the large object was spatially "sampled" through motion gestures, and the small object was mapped onto the 2D surface of the artefact. When exploring the large object, the artefact was often used as a poking tool and moved through space to detect the object’s outline. Envisioned haptic feedback upon collision included force feedback resisting the arm movement, or vibration. A variant incorporating both hands "hugged" the object’s outline and involved force feedback at the second hand, which was not holding the artefact, or feeling the umbrella’s round outline along the arms. The relative distance between both arms also provided a proprioceptive sensation of the object’s large dimensions. Another way to sense the size of the umbrella involved grasping the artefact as a substitute for the umbrella’s handle and moving it up and down. Participants then described the size sensation as the vertical resistive force coming from the air resistance. A concrete idea for implementing such vertical force feedback utilised a magnetic repelling force between the grasping hand and an envisioned second magnet positioned at the top of the virtual umbrella.
In contrast to gestures heavily relying on spatial movement, participants mainly mapped the small virtual paperclip onto the artefact’s 2D surface, focusing the interaction on the exploration of the flat surface. The paperclip’s 2D outline was then examined through a pinching movement, squeezing the artefact’s front and back, guided by a resistive force upon outline collision. For a more in-depth exploration, finger pads either stroked or were pressed onto the surface to sense the size through envisioned surface deformation or an additionally attached wire. Alternatively, the paperclip’s outline was also experienced through the artefact resting inside the palm, with the size outline pressing into the palm’s skin via surface deformation on the artefact’s back. A motion gesture, similar to the exploration of the large umbrella, was also employed for the paperclip. In this context, the grip was restricted to the artefact’s corner to enable a smaller grip pose appropriate for the object while also providing passive haptic feedback from the physical feature.

5.2 Shape

Haptic knowledge about shape was mainly retrieved through gestures that allowed participants to enclose their hands or fingers around the physical shape of the artefact, serving as a substitute for the virtual object. The simple and straight shape of the toothpick was often mapped onto the edges or flat surface of the artefact. To experience the toothpick’s elongated form, participants commonly pinched either the device’s long edge from top to bottom or rotated the device by 90 degrees to pinch its short edge. To capture the sensation of the toothpick’s sharp ends, a wire was suggested that would come out of both ends of the artefact’s edges. Alternatively, a deformation was described that mimics both cone-shaped ends. The toothpick’s thin diameter was investigated by pinching the round edge, providing passive haptic feedback similar to the object’s original shape. Additional rolling of both fingers then mimicked the natural rolling of the toothpick between the fingers. It was also suggested to incorporate one of the physical buttons on the side instead of the artefact’s whole edge, as the button’s shape is closer to the toothpick’s. In scenarios in which participants mapped the toothpick onto the flat surface, the index finger and thumb were in contact with the artefact’s front and back sides, feeling the shape through surface deformation recreating the toothpick’s thin diameter. The same surface deformation on just the front side was also described as a meaningful simulation of the toothpick’s shape, which was then felt through stroking.
For the complex shape of the baseball glove, both hands were often used to grip the artefact from opposing sides. This approach was complemented by a desired shape-changing feature, aiming at enhancing the ability to perceive the object’s volume within both palms. Furthermore, participants applied pressure to the artefact, bending it to gain insights into the shape’s flexibility, conveyed by haptic feedback simulating the level of artefact deformation. Participants were also curious about the glove’s opening for the hand and suggested a gesture for pushing individual fingers through openings in the artefact. "Wearing" the artefact as a glove in this way would then restrict the individual fingers’ movements and convey the arched shape.

5.3 Hardness

Central concepts in the device interaction for exploring the level of hardness involved squeezing and pushing the object, with the primary gestures Grip, Two-handed grip, Pinch and Press (see Figure 3). Grip gestures, one-handed and two-handed, squeezed the artefact’s width and aimed for haptic feedback related to soft or firm device deformation. Additionally, two-handed grip gestures were suggested to bend the artefact and convey a sense of hardness by responding with a break-resistance sensation. Another variant of the two-handed squeezing compressed the virtual object between the user’s second hand and the hand holding the artefact. Participants described a resistive force sensation that would then restrict both hands from approaching each other further and convey a level of softness. In the context of motion gestures, participants gripped the artefact again as a poking tool to push into the virtual object and sense its hardness upon collision through resistive force feedback. Here, participants exhibited different preferences for hand poses while holding the device.
Participants’ investigations of hardness on an object’s local scale were often conducted through a Pinch gesture. During the interaction, participants focused, e.g., on the concept of compressing the thin and pliable sheet of toilet paper. To convey the sheet’s flexibility and deformation, sensing the pushback of the opposing finger was highlighted as an integral component of the tactile encounter, complemented by material deformation and a ripping sensation. The expected sensation when pressing the artefact’s flat surface with a single finger mostly targeted the feeling of a change in the softness of the device material. Furthermore, proposed gestures utilising the unique tactile qualities of the artefact involved pressing a physical button on the artefact’s side with envisioned adaptive button resistance. Additional ideas for higher hardness involved stroking the edges of the device or its camera component (which protrudes considerably on the Google Pixel 7 used in the study) to perceive the tactile hardness of the co-located virtual clamp’s edge.

5.4 Texture

Texture was mainly explored through Pinch and Stroke, for which the virtual object was mapped onto the artefact’s surface or other physical features. Notably, these gestures targeted a rather small area of the artefact, highlighting a more local haptic exploration.
Participants frequently engaged in stroke gestures, executing long lateral motions using their index finger, thumb, or multiple fingers, primarily to explore scratchy surfaces. Their expectations centred around experiencing vibration and surface deformation to effectively convey the granularity of the sandpaper’s texture. Moreover, a unique idea emerged, involving placing an index finger on the back of the device and virtually touching the object. In this scenario, the index finger would sense the device’s scratchiness upon collision. In the case of the sandpaper, some participants used the device as a tool for scratching the object’s surface, combining grip and move gestures to create the desired vibrations and force feedback. Participants also employed pinch gestures, often combined with rolling or stroking motions, to explore the texture. The rice grain was sometimes mapped onto the edge or physical button to take advantage of their smooth surface as well as of the round shape of the edge and the size of the button, which was described as similar to the rice grain. Pinching the artefact’s flat front and back sides was more common for the sandpaper, as participants then expected both sides to respond with different texture sensations, rough on top and smooth on the bottom. Moreover, some participants even suggested substituting parts of the device with small squares of fabric to provide the appropriate sensation.

5.5 Weight

The prevalent gestures for experiencing weight through vertical force feedback were gripping and rocking, forming a large portion of participants’ interactions (see Figure 3). Various techniques for holding the artefact were exhibited, including gripping the device and moving it up and down, gripping it and rotating the wrist, and holding the thumb on the screen or pushing a physical button to simulate grasping the string attached to a balloon while moving the device downward. Additionally, a touch-based gesture involved stroking downwards, pulling a virtual string attached to a balloon, which was mapped onto the screen. This gesture allowed participants to perceive the sensation of a vertical upward force through their thumb, adding to the diversity of interactions explored.
In a contrasting approach, some participants opted for unsupported holding of the device, effectively using it as a scale and moving it up and down to engage with the vertical force feedback. Similarly, one participant rested the device on a table and placed their palm on it, expecting an upward push from the device to simulate the negative weight of the balloons. For interactions involving the hammer, participants primarily grasped the handle, focusing on sensing the imbalanced weight distribution. An interesting concept was also introduced, wherein a second hand was used to create a magnetic repelling force, pushing down on the hand holding the device to simulate a unique haptic experience.

5.6 Temperature

Participants primarily utilised Hover and Tap to sense the temperature of the virtual objects. In the context of Hover, the radiating heat of the burning candle was mapped onto the artefact and explored by hovering one or two hands above the device. We observed different variations in the artefact’s orientation. In one scenario, the device was placed on a table, emanating heat from its front side, while in another, the device was held upright, with heat radiating from the top section to mimic the flame. An additional airflow coming out of the top complemented the haptic sensation of the hot flame.
In contrast to hovering, some participants interacted with hot and cold temperatures through contact with the entire artefact by gripping, tapping, pressing, or resting the device in the palm. For the cold key, participants projected the key’s shape onto the flat surface as the only cold area. To provide localised feedback, participants suggested temperature panels incorporated into the artefact. Another contact-based gesture aimed to replicate the sensation of the highest temperature at the artefact’s top by pinching its upper edge. One participant suggested tapping the flashlight of the artefact as the source of the heat. The lower sections of the artefact’s surface represented the colder wax, which participants explored through stroking and pressing movements. Additionally, surface deformation was mentioned to simulate the varying surface tension of wax at different temperatures.

6 Discussion

The set of user-defined gestures derived from our user study exhibited an extensive variety (see Table 3 and Figure 3), complemented by a diverse range of haptic sensations (see Table 2). Overall, all gestures can be grouped under ten high-level types of gestures: Grip, Grip + Move, Two-handed grip, Pinch, Stroke, Tap, Press, Hover, Unsupported holding, and Wearing Gesture. As a real-time response during the interaction, eleven types of haptic feedback were suggested. In the following, we discuss the impact of haptic attributes on gesture design and the relation between gestures and haptic feedback, and we explore the idea of mobile multi-haptic feedback devices.

6.1 Gesture Design and Three Interaction Modes for the Mobile Device

A key design aspect across all gestures is the variation in the positioning of the mobile device relative to the virtual object and the user’s hand. Based on our observations, we identified three distinct modes that the device can adopt during the interaction: (1) object mode, (2) hand mode, and (3) tool mode (see  Figure 1). In the following, we explore each of these roles and examine the impact of object properties on these gesture variations.
Object mode. In the object mode, the device acts as a physical representation of the virtual object. Here, touching the device is interpreted as touching the virtual object. The device remains stationary in front of the user, with the hand holding it. Participants described the object as either co-located with the handheld device or remaining in the augmented environment. In our study, we further observed three variations of the object mode. First, when the virtual object is smaller than the mobile device, the gesture utilises only a part of the device. For instance, the small paper clip or cold key was mapped onto a partial area of the flat surface, the simple shape of the toothpick was aligned with the device’s edge, or the rice’s texture was explored by touching the device’s similarly shaped physical button. The remaining physical components of the device do not serve as interactive elements in the artefact gesture. Second, when the virtual object is larger than the device, the device depicts only the part of the object that is relevant to the explored haptic attribute. For example, gripping the mobile device in landscape orientation with the entire hand was associated with gripping the middle part of the clamp positioned on the side. Third, the mobile device depicts only the haptic attribute itself and does not portray the object, for example, when tapping the mobile phone’s flashlight as the heat source of the candle. Here, the phone only conveys the temperature, but not the candle. All three variations of the object mode are not always clearly distinguishable, and sometimes more than one can be observed. For example, one could argue that the device did not take on the clamp’s shape during the Grip gesture and, therefore, did not accurately assume the part of the object but rather the sensation of the hard material. However, the positioning of the hand mirrors the one that would be used on the virtual clamp. Therefore, we consider the device representing the clamp’s middle part to be more dominant in the gesture design than its representation of the isolated hardness. While the third variation of the object mode, isolating a specific haptic attribute, might be influenced by our study task, it provides a valuable approach to simplifying a comprehensive simulation of object properties and conveying only the haptic property most relevant to the MR experience.
Hand mode. In the hand mode, the device takes on a physical representation of the user’s hand. As the device is cradled in the open palm with fingers slightly curled to stabilise it, this mode can also be interpreted as extending the inside of the hand and enabling its sensing capabilities towards the virtual object. The envisioned haptic feedback is mainly restricted to the palm. During the object interaction, the hand holding the device moves around the space, approaches the object, and establishes physical contact with its outer shape. For example, the softness of the toilet paper was tested through a two-handed grip, with one hand holding the device and pressing against the paper roll. Other examples included using Grip + Move to press down the lightweight balloon or to "sample" the outer boundaries of the large umbrella and feel the collisions inside the palm.
Tool mode. In the tool mode, the device is used as a tool for indirect interaction with the object. The spatial movement of the device during the interaction is similar to that in the hand mode and elicits haptic feedback upon colliding with the outer boundaries of the object. However, the device is held with a harder grip, and the haptic sensation focuses strongly on proprioceptive feedback, such as force feedback/resistance in mid-air. For example, when poking the object with a Grip + Move gesture, the collision expressed the hardness of the clamp or the large dimensions of the umbrella.
Form factors of the device in the gesture design. In some instances, participants held the device either in portrait or landscape orientation while performing the same type of gesture, such as during the Pinch gesture to explore the size of the paper clip or to trace the shape of the toothpick along the short or long device edge. However, in these interactions, the device’s orientation appeared to be more of a personal preference than a deliberate design decision. Therefore, allowing different device orientations would ensure coverage of users’ individual preferences. The recent reintroduction of foldable mobile phones and tablets in the consumer market, however, will likely impact users’ choices in gesture design. Unfolded mobile devices offer larger dimensions and an extended flat surface, likely prompting more two-handed types of gestures for larger objects. In contrast, when folded, the devices become compact and small, potentially encouraging one-handed gestures and the exploration of small and fine-grained haptic properties. To understand the impact of a foldable form factor on gesture design, further research is needed.

6.2 Multi-Haptic Feedback in Mobile Devices

To cover the variety of haptic features involved in an object’s tactile perception, participants suggested various hardware mechanisms and concepts. Overall, they can be broadly categorised into active feedback, passive haptics (also established through shape change or appendages), arm poses for proprioceptive sensation, and adding a second mobile device. Vibration, nowadays the most commonly used haptic feedback in consumer MR systems, was suggested for only two attributes (size and texture). Going beyond vibration, a plethora of previous works have already explored different approaches to equip an artefact with the form factor of a mobile device with enhanced haptic rendering capabilities, as reviewed in Section 2.2. As exhibited by recent work from Shultz and Harrison, a shape-changing display can already render bumps and holes [45], and further advancements might achieve a more fine-grained deformation of the device’s surface to create a local shape outline of a key or to render distinct lines representing the individual wires of a paperclip. An existing method that could create sharper elevations and edges on the small form factor of a mobile device is a pin array [19]. By repositioning the pins to the mobile device’s edges, users could experience the sharpness of, e.g., a virtual toothpick during a pinch gesture. Other mechanisms could simulate a change in the device’s global shape [40] or the temperature gradient of a virtual candle through heat panels [58]. However, incorporating this variety of hardware components will be highly complex and increase the device’s weight, which goes against the vision of everyday, mobile MR experiences. Considering scalable solutions, a flexible setup that allows users to add or remove elaborate haptic rendering capabilities could take the form of haptic mobile cases, as done in the work by Maiero et al. [29]. Furthermore, software-based concepts known from VR that induce pseudo-haptic effects, such as the manipulation of the control-display ratio [43, 48] or hand redirection [59], could be explored for haptic MR interaction. Manipulating the movement gain when moving the device up and down could convey a sense of weight, or redirecting arm movements while hugging a large object could induce a proprioceptive sensation of different sizes. However, in contrast to VR, where users only see the manipulated virtual hands, MR displays users’ physical hands, making pseudo-haptic illusions challenging. A solution might be blink-suppressed redirection [60] combined with manipulation of the video passthrough to shift users’ displayed arms unnoticed during a blink. Overall, future work should explore the design of mobile devices as multi-haptic interfaces to enable haptic experiences in mobile MR. While software-based pseudo-haptic approaches offer more flexibility and scalability, their application and perceptual mechanisms in MR have to be researched further.
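As a minimal, one-dimensional sketch of the control-display ratio manipulation mentioned above (the gain values and function are illustrative assumptions, not validated perceptual thresholds or the implementation of any cited system):

    def apply_cd_gain(real_positions, anchor, gain):
        """Apply a control-display (C/D) gain to tracked device positions.

        Rendering the virtual device/hand with gain < 1 makes it lag behind the
        real movement, which is commonly reported to be perceived as added weight;
        gain > 1 can suggest a lighter object. 'anchor' is the position at which
        the virtual object was grasped.
        """
        return [anchor + gain * (p - anchor) for p in real_positions]

    # Example: tracked heights (in metres) while lifting the device by 30 cm.
    real = [0.00, 0.10, 0.20, 0.30]
    heavy = apply_cd_gain(real, anchor=0.0, gain=0.6)  # rendered as a "heavy" lift
    light = apply_cd_gain(real, anchor=0.0, gain=1.3)  # rendered as a "light" lift
    print(heavy, light)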

6.3 Multi-Haptic Experiences for Mobile MR

As we singled out individual haptic attributes for eliciting exploration gestures in our study, we gained an in-depth understanding of the core hand movements and haptic sensations. However, our derived gestures and haptic renderings could be combined into a multi-haptic sensation to achieve even more realistic and comprehensive haptic experiences of virtual objects. A small number of participants in our study indeed expressed the wish to sense more than one haptic property. For example, weight was reported as a desired indicator of the umbrella’s size, and simulating the metal texture of the clamp’s hard surface was suggested to improve the perception of hardness. To identify meaningful and appropriate combinations of attributes, gestures, and haptic feedback, we can leverage the relations we identified between one type of gesture and multiple haptic attributes, illustrated in Table 3. In the following, we illustrate the use of our design space with three concrete applications.

6.3.1 Experiencing Augmented Buttons Through Physical Device Features.

Our gesture set shows that a pinching gesture is suitable for exploring a small size, a simple shape, a hot temperature, a rough texture, and both levels of hardness. In this example, the multi-feedback rendering could then include force feedback/resistance, passive haptics through device features, surface deformation, and contact temperature. A potential use case might focus on expanding the device’s interaction space, similar to previous work utilising the space around a smartwatch for off-device gestural interaction [23]. For example, the device’s edges and corners could be augmented and extended with virtual 3D buttons or sliders. This would allow users to experience haptic button sensations, varying in resistance or temperature, that communicate the significance of the action associated with a button. This concept also facilitates new ways to engage with digital content, departing from the device’s conventional means of interaction.

6.3.2 Experiencing Virtual Objects Resting on the Palm.

An unsupported holding gesture (see Figure 3) could be used to simulate a small, light, and cold object (size-min, weight-max, and temperature-min relating to Unsupported holding in Table 3) via force feedback/resistance, surface deformation, and contact temperature (the types of haptic feedback relating to Unsupported holding in Table 3). This haptic description applies, for example, to handling smaller fruits or vegetables such as tomatoes or apples, which might be encountered in MR grocery shopping experiences [46].

6.3.3 Grasping Large Objects with Two Hands.

Larger objects whose size is explored through a Two-handed grip could exhibit a simple or complex shape and soft or hard characteristics when a combined rendering of force feedback/resistance, shape change, and force feedback/resistance for the second hand is applied. A potential application scenario might involve the haptic simulation of a virtual 3D binder connected to the mobile device in an MR mobile office. As the virtual binder fills or empties, changes in the device’s shape offer users a tactile perception of the binder’s content volume.
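To illustrate how such attribute-gesture-feedback relations could be operationalised in an application, the sketch below encodes the three examples above as a simple lookup structure. The entries mirror only the mappings described in this section, not the full contents of Table 3, and the naming and structure are illustrative assumptions.

```python
# Illustrative encoding of the gesture-to-attribute-to-feedback relations
# discussed above (cf. Table 3). Only the three example mappings from this
# section are included; names and structure are assumptions for illustration.

DESIGN_SPACE = {
    "pinch": {
        "attributes": {"size-min", "shape-simple", "temperature-max",
                       "texture-rough", "hardness-min", "hardness-max"},
        "feedback": {"force/resistance", "passive device features",
                     "surface deformation", "contact temperature"},
    },
    "unsupported holding": {
        "attributes": {"size-min", "weight-max", "temperature-min"},
        "feedback": {"force/resistance", "surface deformation",
                     "contact temperature"},
    },
    "two-handed grip": {
        "attributes": {"size-max", "shape-simple", "shape-complex",
                       "hardness-min", "hardness-max"},
        "feedback": {"force/resistance", "shape-change",
                     "force/resistance (second hand)"},
    },
}

def feedback_for(gesture, requested_attributes):
    """Return which requested attributes the gesture covers and the feedback
    channels that would be combined to render them."""
    entry = DESIGN_SPACE[gesture]
    covered = entry["attributes"] & set(requested_attributes)
    return covered, entry["feedback"]

# Example: feedback_for("two-handed grip", ["size-max", "hardness-min"])
# returns the covered attributes together with the combined feedback channels.
```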
Overall, when designing multi-haptic experiences, an important consideration, mentioned by a few participants, is the need to maintain a secure grip on the device during the gesture so that it does not slip out of users’ hands. Lastly, a few participants expressed that they would sometimes prefer visual feedback over haptic exploration. Depending on the use case, a visual inspection of an object’s size can suffice and be more time-efficient than manual haptic exploration. Other attributes, such as texture, however, depend more strongly on haptic information retrieved through manual exploration, which may not be obtainable through visual inspection alone.
In summary, our research and design space carry significant practical implications for the development of future mobile phones as haptic interfaces and of user-defined gestures for haptic exploration in mixed reality. As new iterations of mixed reality technology continue to emerge, we advocate for future research to explore how ubiquitous devices such as mobile phones can be designed to serve both as an MR controller and as a physical substitute for virtual objects. In particular, variations of a phone’s outer shape need to be explored to cover a wider spectrum of object features, haptic attributes, and gestures. While the current slim design of a mobile phone already provided meaningful passive haptic feedback in some instances in our study, e.g. the edges depicting the long shape of the virtual toothpick, it falls short for the majority of our explored cases. For example, the slim shape cannot appropriately convey thicker objects. We believe a shape-changing form factor would allow the device to assume, at least partially, crucial parts of the object for a better haptic experience. For example, a growing edge could portray the thick handle of the umbrella or the clamp, and a sharpening of the phone’s corner via a pin array would provide passive haptic feedback for sharper objects, such as the virtual toothpick. These features can provide a physical approximation of the virtual object’s shape and facilitate indirect interaction.
However, the exploration of new and evolving form factors should also consider how to maintain the traditional use of the phone and balance it with serving as an MR controller. Elaborate shape-changing abilities might allow users to switch between both use cases. The technical feasibility of this concept is suggested by the recent reintroduction of flip phones and the release of bendable phones on the consumer market, which offer a promising outlook for a flexible shape of a future MR phone. Further, the built-in technology for providing real-time haptic feedback needs to be extended beyond the currently available vibrotactile motors to include components that allow the simulation of haptic sensations such as temperature or weight shift.
While we recognise that utilising everyday objects, like mobile devices, for substitutional haptic explorations may limit the range of possible interactions compared to a dedicated MR controller, our findings demonstrate that everyday objects can enable users to perceive and explore approximations of the properties of various virtual objects, with different user preferences depending on the object’s characteristics (Figure 3). Recent developments, such as Apple’s Vision Pro, suggest that specialised MR controllers may become obsolete in future mixed reality experiences. This necessitates a re-evaluation by researchers and practitioners of how users can effectively explore, perceive, and interact with virtual objects, a direction for which our exploration of mobile devices as haptic interfaces provides fundamental groundwork.

6.4 Limitations

A few limitations are worth discussing. First, we investigated the use of a mobile phone as an initial, important step towards haptic mobile MR interactions. We did not explore other form factors of mobile devices, such as tablets or smartwatches, which might impact the design of user-centred gestures and add to our design space. While we acknowledge that our design space for haptic mobile MR interactions is not exhaustive, we believe it provides a comprehensive and detailed overview of the gestures users employ when using their smartphones for object exploration in MR. Second, we explored individual haptic attributes to construct an initial design space of user-centred gestures in MR. Exploring multiple haptic attributes simultaneously might reveal additional gestures that could further broaden our design space in Figure 3. Nonetheless, as our findings show overlaps among the gestures and targeted object properties, they already indicate a multi-attribute coverage of the gesture types. Furthermore, with future applications increasingly taking place in shared public spaces, where mid-air gestures are less preferred by users [32, 34, 51], social acceptability becomes a critical consideration in designing haptic MR interactions for mobile contexts [13]. While our participants expressed that the presence of others in the semi-public study setup did not influence their gesture design, future studies should explore this aspect in a more controlled setting. Finally, all participants in our gesture elicitation study were right-handed; gestures performed by left-hand-dominant users are therefore not covered in our design space. While we believe this does not significantly impact the design space per se, it remains an unexplored area in our research that deserves future attention, as do users with mixed handedness.

7 Conclusion

This paper investigates the use of mobile devices as replacements for specialised mixed reality (MR) controllers when exploring virtual objects in MR. In a 30-minute online survey on Prolific (n = 50) and a 1-hour gesture elicitation study (n = 18), we gained a deeper understanding of the tangible and haptic interactions users employ when exploring the size, shape, texture, hardness, temperature, and weight of virtual objects in MR. Our results revealed three core haptic exploration modes a mobile device takes on when used to understand MR object characteristics and properties: using the device as a substitutional MR object, as a replacement for hands or fingers, and as an additional tool for interaction, similar to how traditional MR controllers are used. Furthermore, we constructed and contributed a design space that maps over 200 gestures (17 participants × at least one gesture for each of the twelve objects) to ten high-level artefact-gestures, facilitating the simulation of sensation, interaction, and expected output haptics. In conclusion, we hope that our design space of haptic interactions and haptic feedback modalities inspires and guides researchers and practitioners when designing future mobile MR experiences that incorporate interactions with, and explorations of, virtual objects in MR.

Acknowledgments

This research benefited from the support of multiple funding bodies. Specifically, it was supported by the German Research Foundation (DFG) under Germany’s Excellence Strategy (EXC 2077, University of Bremen) and by the Swedish Research Council (2022-03196). This research was also made possible through the International Postdoctoral Fellowships (IPF) of the University of St. Gallen. Furthermore, this work was conducted within the framework of the project ’Illusionary Surface Interfaces’, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 425869442 as part of the Priority Program SPP2199 Scalable Interaction Paradigms for Pervasive Computing Environments. We also thank SIGCHI and the internal funding from the University of St. Gallen for their instrumental role in realising the CHI Together event in St. Gallen. Moreover, we want to extend our appreciation to Luca-Maxim Meinhardt, Thomas Mildner, and Ava Scott for their support in the development of the figures. Finally, we thank the reviewers for their thoughtful feedback and our participants for their valuable input.
In this paper, we used Overleaf’s built-in spell checker and ChatGPT (GPT-3.5). These tools helped us fix spelling mistakes and obtain suggestions for improving the writing of the paper. Unless noted otherwise in a specific section, these tools were not used in any other form.

A Sankey Diagram for Gesture Categories and Types of Haptic Feedback

Figure 4: Sankey diagram of the adaptation flows from haptic attributes to performed artefact-gestures and desired haptic feedback. The third column lists the 10 main gesture types identified in our user study; the last column lists the variety of envisioned haptic feedback.

Supplemental Material

MP4 File - Video Presentation (with transcript)
XLSX File - Excel sheet with ratings of Study 1: contains the data from Study 1 and shows participants’ ratings.
PDF File - 100 objects and their haptic properties: an overview of all 100 objects used in Study 1.

References

[1]
Mohammed Al-Sada, Keren Jiang, Shubhankar Ranade, Mohammed Kalkattawi, and Tatsuo Nakajima. 2020. HapticSnakes: Multi-haptic feedback wearable robots for immersive virtual reality. Virtual Reality 24 (2020), 191–209.
[2]
Andrea Bianchi, Ian Oakley, and Dong Soo Kwon. 2011. Spinlock: A single-cue haptic and audio PIN input technique for authentication. In Haptic and Audio Interaction Design, Eric W. Cooper, Victor V. Kryssanov, Hitoshi Ogawa, and Stephen Brewster (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 81–90.
[3]
Ann Blandford, Dominic Furniss, and Stephann Makri. 2016. Qualitative HCI research: Going behind the scenes. Springer, Cham, Switzerland.
[4]
Tobias Boceck, Sascha Sprott, Huy Viet Le, and Sven Mayer. 2019. Force touch detection on capacitive sensors using deep neural networks. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services (Taipei, Taiwan) (MobileHCI ’19). Association for Computing Machinery, New York, NY, USA, Article 42, 6 pages. https://doi.org/10.1145/3338286.3344389
[5]
Victor Chen, Xuhai Xu, Richard Li, Yuanchun Shi, Shwetak Patel, and Yuntao Wang. 2021. Understanding the design space of mouth microgestures. In Proceedings of the 2021 ACM Designing Interactive Systems Conference (Virtual Event, USA) (DIS ’21). Association for Computing Machinery, New York, NY, USA, 1068–1081. https://doi.org/10.1145/3461778.3462004
[6]
Frederick Choi, Sven Mayer, and Chris Harrison. 2021. 3D hand pose estimation on conventional capacitive touchscreens. In Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction (Toulouse & Virtual, France) (MobileHCI ’21). Association for Computing Machinery, New York, NY, USA, Article 3, 13 pages. https://doi.org/10.1145/3447526.3472045
[7]
Heather Culbertson, Samuel B. Schorr, and Allison M. Okamura. 2018. Haptics: The present and future of artificial touch sensation. Annual Review of Control, Robotics, and Autonomous Systems 1, 1 (2018), 385–409. https://doi.org/10.1146/annurev-control-060117-105043
[8]
Florian Daiber, Donald Degraen, André Zenner, Tanja Döring, Frank Steinicke, Oscar Javier Ariza Nunez, and Adalberto L. Simeone. 2021. Everyday proxy objects for virtual reality. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI EA ’21). Association for Computing Machinery, New York, NY, USA, Article 101, 6 pages. https://doi.org/10.1145/3411763.3441343
[9]
Yuze Dan, Zhenjiang Shen, Jianqiang Xiao, Yiyun Zhu, Ling Huang, and Jun Zhou. 2021. HoloDesigner: A mixed reality tool for on-site design. Automation in Construction 129 (2021), 103808.
[10]
Jessica Q Dawson, Oliver S Schneider, Joel Ferstay, Dereck Toker, Juliette Link, Shathel Haddad, and Karon MacLean. 2013. It’s alive!: Exploring the design space of a gesturing phone. In Proceedings of Graphics Interface 2013. Canadian Information Processing Society, Mississauga, Ontario, Canada, 205–212.
[11]
Panteleimon Dimitriadis and Jason Alexander. 2014. Evaluating the effectiveness of physical shape-change for in-pocket mobile device notifications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Toronto, Ontario, Canada) (CHI ’14). Association for Computing Machinery, New York, NY, USA, 2589–2592. https://doi.org/10.1145/2556288.2557164
[12]
Ze Dong, Jingjing Zhang, Xiaoliang Bai, Adrian Clark, Robert W. Lindeman, Weiping He, and Thammathip Piumsomboon. 2022. Touch-Move-Release: Studies of surface and motion gestures for mobile augmented reality. Frontiers in Virtual Reality 3, Article 927258 (2022), 21 pages. https://doi.org/10.3389/frvir.2022.927258
[13]
Pouya Eghbali, Kaisa Väänänen, and Tero Jokela. 2019. Social acceptability of virtual reality in public spaces: Experiential factors and design recommendations. In Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia (Pisa, Italy) (MUM ’19). Association for Computing Machinery, New York, NY, USA, Article 28, 11 pages. https://doi.org/10.1145/3365610.3365647
[14]
Lei Gao, Huidong Bai, Weiping He, Mark Billinghurst, and Robert W. Lindeman. 2018. Real-time visual representations for mobile mixed reality remote collaboration. In SIGGRAPH Asia 2018 Virtual & Augmented Reality (Tokyo, Japan) (SA ’18). Association for Computing Machinery, New York, NY, USA, Article 15, 2 pages. https://doi.org/10.1145/3275495.3275515
[15]
Mayank Goel, Jacob Wobbrock, and Shwetak Patel. 2012. GripSense: Using built-in sensors to detect hand posture and pressure on commodity mobile phones. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (Cambridge, Massachusetts, USA) (UIST ’12). Association for Computing Machinery, New York, NY, USA, 545–554. https://doi.org/10.1145/2380116.2380184
[16]
Jeremy Hartmann, Aakar Gupta, and Daniel Vogel. 2020. Extend, push, pull: Smartphone mediated interaction in spatial augmented reality via intuitive mode switching. In Proceedings of the 2020 ACM Symposium on Spatial User Interaction (Virtual Event, Canada) (SUI ’20). Association for Computing Machinery, New York, NY, USA, Article 2, 10 pages. https://doi.org/10.1145/3385959.3418456
[17]
Fabian Hemmert. 2008. Ambient Life: Permanent tactile life-like actuation as a status display in mobile phones. In Adjunct Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology (Monterey, CA, USA) (UIST ’08). Association for Computing Machinery, New York, NY, USA, 2 pages.
[18]
Fabian Hemmert, Susann Hamann, Matthias Löwe, Anne Wohlauf, and Gesche Joost. 2010. Shape-changing mobiles: Tapering in one-dimensional deformational displays in mobile phones. In Proceedings of the Fourth International Conference on Tangible, Embedded, and Embodied Interaction (Cambridge, Massachusetts, USA) (TEI ’10). Association for Computing Machinery, New York, NY, USA, 249–252. https://doi.org/10.1145/1709886.1709936
[19]
Sungjune Jang, Lawrence H. Kim, Kesler Tanner, Hiroshi Ishii, and Sean Follmer. 2016. Haptic edge display for mobile tactile interaction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI ’16). Association for Computing Machinery, New York, NY, USA, 3706–3716. https://doi.org/10.1145/2858036.2858264
[20]
Mohamed Kari and Christian Holz. 2023. HandyCast: Phone-based bimanual input for virtual reality in mobile and space-constrained settings via pose-and-touch transfer. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 528, 15 pages. https://doi.org/10.1145/3544548.3580677
[21]
Kangsoo Kim, Mark Billinghurst, Gerd Bruder, Henry Been-Lirn Duh, and Gregory F. Welch. 2018. Revisiting trends in augmented reality research: A review of the 2nd decade of ISMAR (2008–2017). IEEE Transactions on Visualization and Computer Graphics 24, 11 (2018), 2947–2962. https://doi.org/10.1109/TVCG.2018.2868591
[22]
Roberta L. Klatzky, Susan J. Lederman, and Victoria Metzger. 1985. Identifying objects by touch: An "expert system". Perception & Psychophysics 37, 2 (1985), 133–146.
[23]
Jarrod Knibbe, Diego Martinez Plasencia, Christopher Bainbridge, Chee-Kin Chan, Jiawei Wu, Thomas Cable, Hassan Munir, and David Coyle. 2014. Extending interaction for smart watches: Enabling bimanual around device control. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1891–1896. https://doi.org/10.1145/2559206.2581315
[24]
Susan J. Lederman and Roberta L. Klatzky. 1987. Hand movements: A window into haptic object recognition. Cognitive Psychology 19 (1987), 342–368.
[25]
Susan J. Lederman and Roberta L. Klatzky. 1990. Haptic classification of common objects: Knowledge-driven exploration. Cognitive Psychology 22, 4 (1990), 421–459. https://doi.org/10.1016/0010-0285(90)90009-S
[26]
Susan J. Lederman and Roberta L. Klatzky. 1993. Extracting object properties through haptic exploration. Acta Psychologica 84, 1 (1993), 29–40. https://doi.org/10.1016/0001-6918(93)90070-8
[27]
Shuo-Fang Liu, Hsiang-Sheng Cheng, Ching-Fen Chang, and Po-Yen Lin. 2018. A study of perception using mobile device for multi-haptic feedback. In Human Interface and the Management of Information. Interaction, Visualization, and Analytics: 20th International Conference, HIMI 2018, Held as Part of HCI International 2018, Las Vegas, NV, USA, July 15-20, 2018, Proceedings, Part I. Springer, Cham, Switzerland, 218–226.
[28]
Joseph Luk, Jerome Pasquero, Shannon Little, Karon MacLean, Vincent Levesque, and Vincent Hayward. 2006. A role for haptics in mobile interaction: Initial design using a handheld tactile display prototype. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada) (CHI ’06). Association for Computing Machinery, New York, NY, USA, 171–180. https://doi.org/10.1145/1124772.1124800
[29]
Jens Maiero, David Eibich, Ernst Kruijff, André Hinkenjann, Wolfgang Stuerzlinger, Hrvoje Benko, and Gheorghita Ghinea. 2019. Back-of-device force feedback improves touchscreen interaction for mobile devices. IEEE Transactions on Haptics 12, 4 (2019), 483–496.
[30]
Akhmajon Makhsadov, Donald Degraen, André Zenner, Felix Kosmalla, Kamila Mushkina, and Antonio Krüger. 2022. VRySmart: A framework for embedding smart devices in virtual reality. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI EA ’22). Association for Computing Machinery, New York, NY, USA, Article 358, 8 pages. https://doi.org/10.1145/3491101.3519717
[31]
Shady Mansour, Pascal Knierim, Joseph O’Hagan, Florian Alt, and Florian Mathis. 2023. BANS: Evaluation of Bystander Awareness Notification Systems for productivity in VR. In Network and Distributed Systems Security (NDSS) Symposium (San Diego, CA, USA). Internet Society, Reston, VA, USA, 19 pages. https://doi.org/10.14722/usec.2023.234566
[32]
Florian Mathis, Joseph O’Hagan, Kami Vaniea, and Mohamed Khamis. 2022. Stay home! Conducting remote usability evaluations of novel real-world authentication systems using virtual reality. In Proceedings of the 2022 International Conference on Advanced Visual Interfaces (Frascati, Rome, Italy) (AVI 2022). Association for Computing Machinery, New York, NY, USA, Article 14, 9 pages. https://doi.org/10.1145/3531073.3531087
[33]
Florian Mathis, Kami Vaniea, and Mohamed Khamis. 2021. Observing virtual avatars: The impact of avatars’ fidelity on identifying interactions. In Proceedings of the 24th International Academic Mindtrek Conference (Tampere/Virtual, Finland) (Academic Mindtrek ’21). Association for Computing Machinery, New York, NY, USA, 154–164. https://doi.org/10.1145/3464327.3464329
[34]
Daniel Medeiros, Romane Dubus, Julie Williamson, Graham Wilson, Katharina Pöhlmann, and Mark McGill. 2023. Surveying the social comfort of body, device, and environment-based augmented reality interactions in confined passenger spaces using mixed reality composite videos. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 7, 3, Article 113 (sep 2023), 25 pages. https://doi.org/10.1145/3610923
[35]
Claudio Pacchierotti, Stephen Sinclair, Massimiliano Solazzi, Antonio Frisoli, Vincent Hayward, and Domenico Prattichizzo. 2017. Wearable haptic systems for the fingertip and the hand: Taxonomy, review, and perspectives. IEEE Transactions on Haptics 10, 4 (2017), 580–600. https://doi.org/10.1109/TOH.2017.2689006
[36]
Cecilia Panigutti, Andrea Beretta, Fosca Giannotti, and Dino Pedreschi. 2022. Understanding the impact of explanations on advice-taking: A user study for AI-based clinical Decision Support Systems. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 568, 9 pages. https://doi.org/10.1145/3491102.3502104
[37]
Gianluca Paolocci, Tommaso Lisini Baldi, Davide Barcelli, and Domenico Prattichizzo. 2020. Combining wristband display and wearable haptics for augmented reality. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE, New York, NY, USA, 632–633. https://doi.org/10.1109/VRW50115.2020.00167
[38]
Martin Pielot, Karen Church, and Rodrigo de Oliveira. 2014. An in-situ study of mobile phone notifications. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services (Toronto, ON, Canada) (MobileHCI ’14). Association for Computing Machinery, New York, NY, USA, 233–242. https://doi.org/10.1145/2628363.2628364
[39]
Ismo Rakkolainen, Euan Freeman, Antti Sand, Roope Raisamo, and Stephen Brewster. 2021. A survey of mid-air ultrasound haptics and its applications. IEEE Transactions on Haptics 14, 1 (2021), 2–19. https://doi.org/10.1109/TOH.2020.3018754
[40]
Majken Kirkegård Rasmussen, Timothy Merritt, Miguel Bruns Alonso, and Marianne Graves Petersen. 2016. Balancing user and system control in shape-changing interfaces: A designerly exploration. In Proceedings of the TEI ’16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction (Eindhoven, Netherlands) (TEI ’16). Association for Computing Machinery, New York, NY, USA, 202–210. https://doi.org/10.1145/2839462.2839499
[41]
Andreas Riegler, Andreas Riener, and Clemens Holzmann. 2020. A research agenda for mixed reality in automated vehicles. In Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia (Essen, Germany) (MUM ’20). Association for Computing Machinery, New York, NY, USA, 119–131. https://doi.org/10.1145/3428361.3428390
[42]
Bahador Saket, Chrisnawan Prasojo, Yongfeng Huang, and Shengdong Zhao. 2013. Designing an effective vibration-based notification interface for mobile phones. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (San Antonio, Texas, USA) (CSCW ’13). Association for Computing Machinery, New York, NY, USA, 149–1504. https://doi.org/10.1145/2441776.2441946
[43]
Majed Samad, Elia Gatti, Anne Hermes, Hrvoje Benko, and Cesare Parise. 2019. Pseudo-haptic weight: Changing the perceived weight of virtual objects by manipulating control-display ratio. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3290605.3300550
[44]
Elaheh Samimi and Robert J. Teather. 2022. Multi-touch smartphone-based progressive refinement VR selection. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE, New York, NY, USA, 582–583. https://doi.org/10.1109/VRW55335.2022.00142
[45]
Craig Shultz and Chris Harrison. 2023. Flat panel haptics: Embedded electroosmotic pumps for scalable shape displays. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 745, 16 pages. https://doi.org/10.1145/3544548.3581547
[46]
Marco Speicher, Philip Hell, Florian Daiber, Adalberto Simeone, and Antonio Krüger. 2018. A virtual reality shopping experience using the apartment metaphor. In Proceedings of the 2018 International Conference on Advanced Visual Interfaces (Castiglione della Pescaia, Grosseto, Italy) (AVI ’18). Association for Computing Machinery, New York, NY, USA, Article 17, 9 pages. https://doi.org/10.1145/3206505.3206518
[47]
Carolin Stellmacher, Nadine Wagener, and Kuba Maruszczyk. 2021. Enhancing VR experiences with smartwatch data. In Workshop on Everyday Proxy Objects for Virtual Reality (EPO4VR) at CHI ’21 (Yokohama, Japan). Association for Computing Machinery, New York, NY, USA, 6 pages.
[48]
Carolin Stellmacher, André Zenner, Oscar Javier Ariza Nunez, Ernst Kruijff, and Johannes Schöning. 2023. Continuous VR weight illusion by combining adaptive trigger resistance and control-display ratio manipulation. In 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR). IEEE, New York, NY, USA, 243–253. https://doi.org/10.1109/VR55154.2023.00040
[49]
Hemant Bhaskar Surale, Aakar Gupta, Mark Hancock, and Daniel Vogel. 2019. TabletInVR: Exploring the design space for using a multi-touch tablet in virtual reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3290605.3300243
[50]
Xinyi Tao, Keyu Wu, and Yujie Yang. 2021. The effects of vibration on assisting game play and improving player engagement when lacking sound. In HCI in Games: Experience Design and Game Mechanics: Third International Conference, HCI-Games 2021, Held as Part of the 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings, Part I. Springer-Verlag, Berlin, Heidelberg, 389–407. https://doi.org/10.1007/978-3-030-77277-2_30
[51]
Ying-Chao Tung, Chun-Yen Hsu, Han-Yu Wang, Silvia Chyou, Jhe-Wei Lin, Pei-Jung Wu, Andries Valstar, and Mike Y. Chen. 2015. User-defined game input for smart glasses in public space. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea) (CHI ’15). Association for Computing Machinery, New York, NY, USA, 3327–3336. https://doi.org/10.1145/2702123.2702214
[52]
Jamie Ullerich, Maximiliane Windl, Andreas Bulling, and Sven Mayer. 2023. ThumbPitch: Enriching thumb interaction on mobile touchscreens using deep learning. In Proceedings of the 34th Australian Conference on Human-Computer Interaction (Canberra, ACT, Australia) (OzCHI ’22). Association for Computing Machinery, New York, NY, USA, 58–66. https://doi.org/10.1145/3572921.3572925
[53]
Dangxiao Wang, Yuan Guo, Shiyi Liu, Yuru Zhang, Weiliang Xu, and Jing Xiao. 2019. Haptic display for virtual reality: Progress and challenges. Virtual Reality & Intelligent Hardware 1, 2 (2019), 136–162. https://doi.org/10.3724/SP.J.2096-5796.2019.0008
[54]
Jacob O. Wobbrock, Meredith Ringel Morris, and Andrew D. Wilson. 2009. User-defined gestures for surface computing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Boston, MA, USA) (CHI ’09). Association for Computing Machinery, New York, NY, USA, 1083–1092. https://doi.org/10.1145/1518701.1518866
[55]
Koji Yatani. 2009. Towards designing user interfaces on mobile touch-screen devices for people with visual impairment. In User Interface Software and Technology (UIST) (Victoria, BC, Canada) (UIST ’09, Vol. 9). Association for Computing Machinery, New York, NY, USA, 4 pages.
[56]
Yen-Ting Yeh, Fabrice Matulic, and Daniel Vogel. 2023. Phone sleight of hand: Finger-based dexterous gestures for physical interaction with mobile phones. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 519, 19 pages. https://doi.org/10.1145/3544548.3581121
[57]
Xin Yi, Shuning Zhang, Ziqi Pan, Louisa Shi, Fengyan Han, Yan Kong, Hewu Li, and Yuanchun Shi. 2023. Squeez’In: Private authentication on smartphones based on squeezing gestures. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 532, 15 pages. https://doi.org/10.1145/3544548.3581419
[58]
Tianyu Yu, Weiye Xu, Haiqing Xu, Guanhong Liu, Chang Liu, Guanyun Wang, and Haipeng Mi. 2023. Thermotion: Design and fabrication of thermofluidic composites for animation effects on object surfaces. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 425, 19 pages. https://doi.org/10.1145/3544548.3580743
[59]
André Zenner and Antonio Krüger. 2019. Estimating detection thresholds for desktop-scale hand redirection in virtual reality. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, New York, NY, USA, 47–55. https://doi.org/10.1109/VR.2019.8798143
[60]
André Zenner, Kora Persephone Regitz, and Antonio Krüger. 2021. Blink-suppressed hand redirection. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR). IEEE, New York, NY, USA, 75–84. https://doi.org/10.1109/VR50410.2021.00028
[61]
Feng Zhou, Henry Been-Lirn Duh, and Mark Billinghurst. 2008. Trends in augmented reality tracking, interaction and display: A review of ten years of ISMAR. In 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality. IEEE, New York, NY, USA, 193–202. https://doi.org/10.1109/ISMAR.2008.4637362
