Efficient Gestures in Users’ Preference, Health, and Natural Inclination for Non-Touch-Based Interface


Bachelor Thesis, 2015

76 Pages


Excerpt


TABLE OF CONTENTS

INTRODUCTION
Statement
Theoretical Framework
Motivation
Hypothesis and Research Questions
Contribution

LITERATURE REVIEW
User Preference
Defining Usability
Intuitive Gestures
Technological Biases
Gesture Elicitation Methods

METHODOLOGY
Setting and Participants
Instrumentation
Tasks for the Participants
Procedure
Data Processing and Analysis
Validity

RESULTS
Consistency in User’s Preference For Gestures
Relationship Between Stress And User’s Preference
Correlation Between User’s Preference And Types of Gestures
Influences of Commands on Gestures
Association Between The Appropriateness of Tasks And Stress
Challenging Gestures
Low Stress Gestures

DISCUSSION

CONCLUSION

REFERENCES

APPENDICES
Appendix A: Questionnaires of User Survey
Appendix B: Activation Command
Appendix C: Scroll Command
Appendix D: Page Command
Appendix E: Selection Command
Appendix F: Undo Command
Appendix G: Return Command
Appendix H: Challenging Gestures

ABSTRACT

With the number of technologies that have been introduced over the past few years, gesture-based human-computer interaction is becoming the new phase in expanding the creativity and the ways in which users communicate and interact with devices. Current gesture recognition systems lack sufficient consideration for the impact they have on users’ health and preferences during usage. Thus, determining and defining low-stress and intuitive gestures for user interaction is necessary for improving human-computer communication and increasing the usability of gesture-driven devices. To find the relationship, if any, between stress and intuitive gestures, a Galvanic Skin Response instrument was worn by fifteen participants (10 females, 5 males) as the primary measurement of stress. The participants also created and performed their own gestures for specified tasks, which included “activation of the display,” “scroll,” “page,” “selection,” “undo,” and “return to main menu.” They were then asked to repeat their gestures for around ten seconds each, giving them time and further insight into whether their gestures would be appropriate for them and for any given task. Surveys were given to the users at two points: one after they had defined their gestures and another after they had repeated them. Participants were found to rank their gestures based on comfort, intuition, and ease of communication. On average, in more than half of the cases, users also changed the gestures they had initially preferred after practicing them. Given that the participants’ rankings were based on comfort and intuition, the highest-ranked gestures were selected as health-efficient: single or double tap, single or double clap, arm circle, slide, wave, swipe, point, holding up numbers, drawing in mid-air, and forming a shape.

ACKNOWLEDGMENTS

This project would not have been possible without the support of distinguished people. I take this opportunity to express my deep gratitude to Dr. Erin Walker and Andreea Danielescu for all their help, support, and expert advice. I also thank my parents.

INTRODUCTION

Every year, hundreds of millions of mobile hand-held devices integrated with touch screens are purchased all over the world. Every day, the average teenager between the ages of thirteen and seventeen spends around 7.5 hours on technology, including personal computers, smartphones, and video games (Jay, 2015). An examination of these advanced and popular devices shows that they all require little to almost no physical activity beyond hand usage. The devices also allow users to directly request features with a simple touch of a finger or through an intermediate instrument, such as a mouse and/or keyboard. Requiring minimal to almost no bodily movement from users has played, and continues to play, an important role in the long-term usability of and user preference for these touch screen devices. However, these same technologies restrict users to a narrow space in which they hold and interact with a medium, allowing little freedom to move around and introducing a high amount of stress and muscle pain from long usage. Because of these concerns, current research is recognizing and building free-space interfaces. Growing research studies and prototypes for non-touch gesture-driven interfaces will also need to consider health aspects and the physical ways users move during interaction.

Through non-touch gesture-driven interfaces, users perform bodily movements in front of a camera sensor without needing to touch the actual interface or use an intermediate instrument to communicate. For example, the Xbox Kinect, a motion-tracking system that provides free-space interactivity in video games, allows a user to stand before the sensor while the system reads the user’s body movements as input. Free-space interfaces provide the opportunity for users to make full use of their bodies in order to communicate with complex computational systems. Conversely, by physically demanding that users perform gestures, they pose physiological health challenges. One such challenge is the amount of physical stress placed on users over any length of time when using such interfaces.

Additionally, based on the attributes of gestures, such as the number of movements required and the number of body parts used, users may lose motivation and/or become frustrated at having to accurately perform the same gestures multiple times for simple tasks, such as selecting an object or browsing through multiple pages presented in a system. Hence, they may lose interest in using such an interface. User preference has therefore become a major consideration during and after interaction, as it plays a deep-seated role in the usability of devices. Due to these health-related challenges, it is important to investigate the relationship between these challenges and user preference, as well as the manner in which users engage in mid-air gesture interaction. Understanding how users define their gestures and the influences that affect their decisions will ultimately ensure the long-term usability, and thus longevity, of free-space interactions in the general marketplace. In the next section, the current thesis further discusses the background, research questions, motivation, and contributions of the thesis. The paper then provides a review of the literature. In the Results section, the thesis presents factors, such as physiological stress and psychological responses, that have been found to influence users in various aspects of gestural interaction with a device. Later, in the Discussion section, the paper identifies efficient gestures, which are determined based on gestural attributes and health statistics found in the observational studies, as well as tailored to users’ preferences towards gestures and the quality of tasks.

STATEMENT

Despite the maturity of gesture-driven technologies, they have yet to provide a higher-level means of communication while recognizing simple gestures that would allow computers to fundamentally understand human meanings and commands. Currently, these technologies serve as instruments of control rather than for high-end correspondence with autonomous devices. Although robust and intuitive, these interfaces are still far from human-to-human communication standards; they should allow users to freely use more than just their hands while not expecting users to exert a large amount of force for any kind of task, simple or complex. With this consideration, health should become an important concern for free-space interfaces, where usability would impact users’ health over time (short- or long-term) as well as their experience. What has been a subtle issue in research is long-term usability. It is challenging to foresee how user preference evolves over time when users perform gestures that may or may not be intuitive to them. Thus, the gap in knowledge within the research community is the relationship between potential physiological stress and users’ preference over a long period of gestural interaction with such mid-air gesture-driven devices.

THEORETICAL FRAMEWORK

Recent studies have achieved a practical level of gestural communication by means of wearable technologies, touch-based displays, and intuitive gesture recognition systems, such as the Microsoft Kinect and Leap Motion. Touch-based devices, however, require users to handle them directly in order to interact. On the other hand, current non-touch-driven devices, such as the Kinect, constrain users to move their body parts accurately in parallel with elements (e.g. a cursor) on screen in order to control those elements (in other words, the body part has to align with a position on screen and acts as a direct remote control). Depending on the length of usage, these interactions can burden users with physical constraints, such as pressing power for a device and calibration, or using touch-screen and accelerometer-embedded wearable devices. Research must move away from these potential and existing constraints in favor of free-space interfaces (or “non-touch-based interfaces”): this will make gestures appropriate for control tasks in human-computer interaction (HCI) and advance human-computer communication by narrowing the distance between HCI performance and human-human interaction (Lindeberg, Sundblad, & Bretzner, 2001; Elmezain, Al-Hamadi, & Rashid, 2009). Research must also consider user health conditions and long-term usage, which are among the factors that contribute to the appropriateness of gestures for specific tasks within such interfaces. Without such research ventures, a potential alternative interface would not have lasting usability alongside existing ones, such as common keyboards, mice, and touch screens.

In studies where interfaces demand extensive physical capability from users, defining a gestural language that is appropriate to general users becomes essential. However, the process of defining the language—a set of gestures suitable for an expected interface—becomes a challenge in itself. As users vary in background and culture, a gestural language must consist of popular types of gestures that a majority of users would most prefer, both to ease the learning of the gestures and to ensure the appropriateness of those gestures for achieving certain computational results. In addition, a gesture language for in-air interactions has yet to be explored in a task-driven scenario (Valdes, Eastman, and Grote, 2014). Valdes et al. state that exploring users’ expectations relative to tasks, or the actions that users must achieve, provides further design considerations for tangible user interaction. The same can be said for in-air gestural interaction.

While a verbal language is made up of different conjugations and tenses, a gestural language has various orientations and types of gesticulation. The following types of gesticulation have been identified by Athavale and Deshmukh (2014, p. 39): symbolic, deictic, iconic, and pantomimic. First, a symbolic gesture has a specific meaning that is understood by individuals of the same culture, while a deictic gesture directs the attention of a recipient in a conversation to a specific place or object in a given environment. For example, the action of pointing to a specific place on screen would be a deictic gesture. The term iconic refers to gestures that represent objects and/or actions and are understood in context (e.g. representing tree climbing by having outstretched arms with semi-closed hands). Lastly, a pantomimic gesture portrays objects or actions without accompanying speech (e.g. mimicking animals or forming a fist to represent a rock). These types of gesture become important in the process of designing a gestural language, as they initially provide designers with insights on specifications that may further help them understand users’ preferences for gestures.

An important aspect to consider for the interface is how user interaction may or may not change over a period of time. In gesture elicitation studies, gestures may seem appropriate to users for a specified task, but a user’s enthusiasm may decrease when taking into account the long-term usability of mid-air interfaces. For example, the level of fatigue that arises from unsupported bodily movements affects a user’s enthusiasm. While much of the research focuses on the development of gesture recognition systems and the design of an intuitive gestural language, considering health aspects at an early stage can provide significant insights into the engineering of mid-air gesture-driven interfaces. Particularly in gestural interaction, gestures that require hand or arm movements without support may be unsuitable for repetition due to the chance of the user becoming fatigued (Lenman, Bretzner, and Thuresson, 2002). Early researchers have investigated exhaustion and unnatural forms of movement arising from interaction with existing free-hand prototypes, but have yet to address such factors in free-space gesture-based interfaces.

Because of the different restrictions of existing technologies and the number of considerations, engineers and designers must find ways to facilitate communication between human users and machines while lengthening their usability. Ergonomics is a relevant area to discuss regarding how users may handle a free-space interface, specifically the manner of their movement. Investigating the ergonomics of mid-air gestures may help in understanding the usability and effectiveness of users, if not of gestures, in relation to such interfaces. The efficiency of a gesture can be determined by the level of muscular tension, the duration, and the space in which the gesture is performed. While assuring salient interaction, an efficient gesture can support user comfort and reduce the energy exerted. Theoretically, the most comfortable posture and action during user interaction is one that demands little to almost no bodily movement. Kolsch et al. (2003) identify that a comfortable arm or hand movement requires minimal trunk motion while allowing the trunk freedom to move. With regard to arm gestures, musculoskeletal issues must also be considered due to the physical limitations of finger and wrist movement (Muscovich and Hughes, 2006).

MOTIVATION

At the stage of identifying and designing gestural interfaces, stress is a subtle but significant concern. In fact, it is one of the largest issues in workplaces, being not only a health concern but also a leading factor in the usability design of computational devices (Murphy and Schoenborn, 1993). Given the maturity of gesture-driven technologies, identifying and establishing a set of simple gestures that require the least amount of physical exertion from users will support accessibility, convenience, and a worthwhile experience for general users.

HYPOTHESIS AND RESEARCH QUESTIONS

The current paper emphasizes the communicative attributes of human users and the conceptual, ideal capability of mid-air interfaces to read any gesture. It identifies efficient gestures by comparing the amount of physical stress with users’ preferences for the gestures and the gestural types. The hypothesis is that there is a correlation between user preference and certain types of gestures, as well as a negative correlation between user preference and the stress that results from performing gestures. In addition, the hypothesis anticipates a twenty-five percent change in user preference for the set of gestures for each task over time, due to the likelihood that users may change their preference for certain gestures after a period of time.

As the goal of the research is to understand the science behind user preference for communicative gestures and what makes gestures efficient for users, the thesis examines the factors that contribute to stress during the performance of gestures and the changes in user preference. Throughout the study, it seeks to answer the following questions:

1. Is there a correlation between a user’s preference for particular gestures and the tasks given to the user? (More details about tasks are given in the Methodology section.)
2. How consistent is user’s preference for gestures they have defined for specific tasks?
3. Is there a relationship between the appropriateness of tasks and stress?
4. Does the specific nature of a task affect the type of gesture performed, which could give us insight into, or more information about, the specific factors that influence users’ preferences during general usage of said mid-air devices?
5. Is there a relationship between stress and user’s preference?
6. Will there be any low stress gestures after comparisons between user preference and stress?

With these questions in mind, the hypothesis can be evaluated based on the relationships that are found and any influences of those relationships on users’ creation of gestures.

CONTRIBUTION

The thesis focuses on discovering and providing a low-stress gesture vocabulary by examining user preference and the types of gestures (e.g. deictic and iconic), as well as providing a guideline for developing the language for backend computational systems that would allow computers to recognize human structure and gestures. Ultimately, it contributes a set of gestures that, after analysis of the results, are considered most comfortable, preferred, and intuitive to users. At a time when people spend a third of their day using technology, these gestures could support users’ long-term interaction with devices and minimize stress during the process. The current paper also employs and discusses a different gesture elicitation method, which was used in this study to elicit user-defined gestures and observe user preference for gestures over time for each task. The paper sheds light on factors that affect users’ gestures, in the hope of illuminating areas that concern physical health and user enthusiasm in long-term free-space interaction.

LITERATURE REVIEW

Many studies have investigated and discussed defining an appropriate and intuitive gestural language for mid-air interfaces, as well as the challenges that have arisen from that investigation. Several researchers have found through their studies that some challenges lie in at least one of the following factors: users’ technological biases; the variation in gestures among users; and the limitations of elicitation studies. These problems significantly affect users’ preference for and performance of gestures, leading users to create gestures that would be unfavorable and inappropriate for long-term usage of devices. This section provides a critical review of research related to the topics at the core of the thesis.

User Preference

Often, gestures are not designed based on user preference or needs but on the requirements and flexibility of a system (Wobbrock and Wilson, 2009). User preference is one of the most important considerations in creating an intuitive gesture vocabulary for systems. For this reason, it has become significant to understand the nature of the various gestures preferred by users and the common trends of user-defined gestures. Several researchers found in their studies that gestures collectively and commonly defined and performed by many users were considered most appropriate for specific commands and tasks. In their joint studies on user interaction with computational surfaces, Wobbrock, Morris, and Wilson (2009) note that users exhibit similar gesture patterns when performing physically and conceptually simple gestures, which are either symbolic or physical and use only one hand. Interestingly, these gestures were defined by many people and rated higher on a Likert scale than those defined by fewer people. Similarly, in their research on free-space gestures for TV viewing, Wu and Wang (2013) mention that their participants favored and gave higher ratings to small-scale, simple gestures that were symbolic and required the use of one hand. They also highlight that providing a situational context for participants influences their preference for such gestures. Users were most interested in metaphorical, meaningful gestures, as they correlate the gestures with familiar cognitive habits formed in the real world.

The design process of involving end users in identifying a gesture language becomes a way to incorporate different aspects of user consideration and interaction. Wu and Wang (2013) observed that gestures developed by more end users are considered easier to learn and remember, very comfortable, and suitable for various commands. Showing in their studies that only 60.9% of gestures authored by HCI designers were used by participants, Wobbrock et al. (2009) assert that experienced researchers and designers cannot fully predict a set of gestures that meets user preferences and expectations of interaction. The challenge of creating gestures lies in the negotiation between user-defined and designer-defined gesture sets. Although Wobbrock et al.’s research focuses on eliciting gestures for tabletop interfaces, their conclusion gives insight into the gap in research involving the correspondence between user preference and intuitive gestures. On mid-air interfaces, Aigner et al. (2012) have also noted:

“[…] there is little consensus among users in association between gestures and their expected effect.”

As seen with past research, bridging user preferred gestures and appropriate communicative effect would provide a user-friendly interface.

While the aforementioned studies provide useful information and observations on user-preferred gestures, they have not examined in depth the underlying factor that contributes to the comfort of gestures. One should investigate the stress that results from these user-preferred and intuitive gestures and determine a set of gestures that would be most appropriate (i.e. comfortable over a period of time and favorable) for users to perform. Although this research also illustrated and reported the gap between designer-authored and user-defined gestures, it should also consider how these user-defined gestures change over time depending on the accessibility and ease of use of free-space interfaces.

Defining Usability

In examining the usability of gestures for free-space interfaces, the time and energy expended by users are the factors that need the most attention when designing a gestural language. A small group of HCI researchers is concerned with the long-term effect of user interaction with free-space interfaces. These researchers emphasize the issue of usability in performing gestures. They state and establish four common principles (out of five) of usability: learnability, efficiency, memorability, and errors (Cabral et al. 2005; Nielsen et al. 2003). For the fifth principle, Cabral et al. identify satisfaction, which includes subjective factors such as trust and comfort, whereas Nielsen et al. introduce the notion of coverage, defined as the number of discoverable general operators out of the total. Since performing gestures over a long period of time can be physically stressful, free-space interfaces should accommodate users’ needs in a way that is an alternative to, and as useful as, other existing interfaces (Cabral et al. 2005).

Usability is significant in understanding the impact of mid-air interfaces on user intuition and even more significant in visualizing the amount of physical exertion during usage. In their 3D visualization and navigation experiments, Cabral et al. (2005) investigated the usability of a gestural interface and how well users would respond to the interface during interaction. They mention that users would spend some time familiarizing themselves with both the interface and the visual application. Cabral et al.’s results show that, due to users’ lack of familiarity with such interfaces and the larger extent of gestural movement compared to the mouse and other devices, participants performed worse on pointing tasks with gestures than with a mouse as an input device. Several participants also reported arm fatigue. According to Cabral et al. (2005), users’ fatigue increases when gestures are used to complete pointing tasks compared to when users achieve the same task with a mouse, and using gestures even for a short amount of time can cause fatigue. Similarly, in Farhadi-Niaki et al.’s (2013) study on gestures for desktop tasks, there was less fatigue when using finger gestures instead of arm gestures because finger gestures are more natural.

Nonetheless, the principles of usability have become important to any interface, shaping the structure, sequencing, feedback, and gestures of the user interface (Nielsen et al. 2003). However, Nielsen et al. state that, as gestures are rather complicated with many attributes, the problem remains with the application and the interface themselves. Morris et al. (2006) point out that users must discover and learn how to use and perform gestures effectively to take advantage of free-space interfaces. In their study investigating the immediate usability of Graffiti gestures, they found that users could learn the entire gesture set in under five minutes (Morris et al., 2006).

Although the research discussed provides guidelines and introduces the notion of usability, further details on each sub-topic must take those guidelines into account, especially the component of efficiency. The limitation of both Cabral et al.’s and Nielsen et al.’s principles is that they should denote not only recommendations for designing a language for a system, but also considerations for designing time- and space-efficient user interaction with the system. Their idea of usability should incorporate enabling general users to perform gestures efficiently. This should encompass the length of time, amount of space, and effort put into performing the gestures; the ideal would be less time and effort used for gestures in a sufficient amount of space. The current paper moves away from manipulative gestures, which control objects in space and are also a main focus of Nielsen et al.’s research. It also investigates the long-term performance of gestures and identifies efficient gestures (i.e. low-stress, most preferred, and assumed to be less time-consuming).

Intuitive Gestures

Examining and designing mid-air gesture vocabularies remains challenging for researchers, as the process requires different considerations of the set of gestures that would be intuitive to a large population of users. Meaningful gestures are either manipulative or communicative (Truyenque, 2005; Pavlovic et al., 1997). Current interfaces are limited to users employing manipulative gestures, which are intended to control the behavior or movement of computer-generated objects on screen. Moving towards an interface that facilitates human-computer communication would require users to employ communicative gestures, which possess an inherent communicative property and can be either an act or a symbol (Pavlovic et al., 1997). In the framework of gesture interaction with the hand, Truyenque (2005) writes that manipulative hand gestures have longer gesture interval lengths than communicative gestures (p. 30). For this reason, free-space interfaces should encompass many communicative gestures to narrow the gap towards human-to-human communication. Although communicative gestures are most recommended and systems should read such gestures, Karam and Schraefel (2005) state that many systems employ a variety of gesture styles, including semaphoric (that is, using a form of signs to communicate information) and manipulative ones. As also found in their investigation of the types of gestures that users would be most inclined to perform, Aigner et al. (2012) report that users employed a wide variety of gestures to accomplish the same computational result, but the type of gesture they performed was frequently consistent across time and even across participants. For a selection command, participants were seen to use pantomimic gestures (e.g. grasping) with one hand. For a navigation command, several gestural types were used. For instance, manipulative gestures were often used to move a cursor to a specific position on screen, and semaphoric gestures were employed to indicate the direction of the movement.

In studying user intuition, it is crucial to note the factors that make gestures most accessible and natural to users. Past research on gesture design has found user preference for and inclination towards several types of gestures, such as deictic and direct manipulation gestures. Studying how inclined users would be to use communicative gestures that have not been commonly applied in existing interfaces would provide significant gains in designing free-space gestural interfaces. Several researchers have supported the idea of further defining a gesture taxonomy that would help improve upon existing technological interfaces, such as the Leap Motion (Vafaei, 2013; Nielsen et al., 2003). In discussing approaches for finding gestures, Nielsen et al. (2003) state:

“In order to ensure intuitive and logical mapping, the first step is an investigation into the interfaces of known applications is conducted to identify the needed functionalities” (p. 6).

Researchers informing the design of gestural languages present elementary attributes of gestures that help to provide simple analysis and considerations for the design of gestures. In his study on the taxonomy of gestures, Vafaei (2013) presents a detailed statistical representation of different attributes of gestures: nature, form, binding (i.e. dependency on or independence of the screen world), temporal (i.e. continuous or discrete), context (i.e. whether gestures require context or not), dimensionality (along the world-coordinate x-, y-, or z-axis), body parts used, and more. On the notion of nature, he presents five different types of gestures: pantomimic, pointing, manipulative, symbolic, and abstract. He reports in a graph that pantomimic, pointing, and symbolic types of gestures were not used as often as manipulative and abstract types for a number of commands. For a selection command, participants performed a range of these types, of which pantomimic gestures were the most frequent. In his analysis of the body parts that participants employed, arms were dominantly used to achieve different computational results across all commands.

Finding and identifying a set of intuitive yet appropriate gestures for a system remains challenging. Across different users, and even for one user, the same gesture varies in shape, trajectory, and duration (Elmezain et al. 2009). Gestures also vary in meaning across participants (Morris et al., 2014). For instance, Morris et al. mention that some participants felt that two-handed gestures dealt with “big” while some felt they dealt with “fast.” Nielsen et al. discuss the risk that arises from users’ performance and ability to remember each gesture when using a gesture vocabulary. As each gesture can be identified using descriptive labels describing its movement, there are concerns that gestures may become too similar, which may cause users to substitute one gesture for another. A lack of intuition affects users’ performance and ability to remember gestures. Examining the underlying factor of users’ ability to remember gestures, Quek et al. (2002) present catchment: a unifying concept that associates different discourse components. The logic of catchment is that if users have recurring thoughts of particular visuo-spatial imagery, they will generate or perform recurrent gesture features. In other words, recurrent images suggest a common discourse theme. Because users differ in how they perceive the world in related and unrelated components, using catchments allows researchers to identify characteristics and/or cognitive styles.

While the above studies supply practical and compelling information on the notion and on both the elementary and higher-level natures of gestures in regard to user intuition, there is little to almost no mention of how trends in user-defined gestures may or may not change over a period of time based on other factors, such as the context and visual feedback of the systems used to elicit gestures. Quek et al.’s research has laid a foundation of ideas related to gesture as a modality, as well as presented graphs visualizing the statistics of user inclination towards performing different types and attributes of gestures. However, their research pertained to many application domains, without focusing only on or representing free-space interfaces. Although Vafaei presents various statistics on gestural attributes employed by participants in mid-air interfaces, his list of commands given to the participants was based on videos, which elicited gestures for locating cursors and resulted in dominantly abstract and manipulative gestures.

The current paper explores the change in the nature of user-defined gestures and emphasizes areas of user performance on communicative gestures as opposed to manipulative gestures.

Technological Biases

When conducting gesture elicitation studies, a common concern is the tendency for users to carry over habits and experiences from interacting with touch-based devices and computers (Morris et al., 2014). These habits, referred to as “legacy bias,” usually take the form of gestures that control objects on screen and require touching the device’s surface. Morris et al. (2014) write that:

[…] legacy bias limits the potential of user elicitation methodologies for producing interactions that take full advantage of the possibilities and requirements of emerging application domains, form factors, and sensing capabilities (p. 44).

Because mid-air gestural interfaces require different interactive approaches, such studies would not gain much from users providing gestures inspired by interaction with existing devices. Such is the case with the study by Morris et al., in which the authors noted that participants were likely to use pointing gestures for selection and swiping gestures for scrolling or paging. They encouraged creativity to mitigate the participants’ biases. Interestingly, Oh and Findlater (2013) report in their studies that users tended to create and perform gestures with familiar attributes used in existing commercial products, despite being instructed to define creative and new gestures.

There remains a strong user bias towards single-finger and single-hand gestures (Grijincu et al. 2014). Grijincu et al. state that users were heavily biased towards choosing simple hand postures that tended to be unchanging, with the hands moving in simple, non-segmented, open paths. In their study of tabletop interfaces, Epps et al. (2006) observed that users have a common tendency to use their index fingers for various tasks. Although their studies investigated gestures for touch-based interfaces, their notes reveal that participants were constrained to familiar gestures from experience with previous systems.

The bias present in gesture elicitation studies may be due to the fact that users do not have a clear and familiar mental model of the applications. Vafaei (2013) observed that there was a high rate of abstract gestures for the “zoom in/out” and “undo” tasks because of users’ lack of familiarity with using gestures for such commands. A few studies report that user familiarity with particular interfaces contributed to the memorability of gestures. Bhuiyan and Picking (2011), based on their study, suggested that gesture technologies can be combined with everyday objects to increase user familiarity and intuition. In their gesture elicitation studies, to avoid users forgetting particular actions, Nacenta et al. (2013) purposely created three user-familiar frameworks in which their participants performed gestures: image editing, web browsing, and word processing. By presenting those frameworks, they expected that their participants would not confuse computational results through being unable to remember the actions.

In any case, biases may limit gesture elicitation methods and restrict interactions that may be better suited to a given interface than those that users have used before with existing interfaces (Morris et al. 2014). To minimize legacy bias in their study, Morris et al. repeatedly encouraged their participants to create as many gestures as they could, until each participant had suggested between five and nine gestures. In studying user preference for mid-air gestures, Aigner et al. (2012) avoided technical biases by recruiting participants from a variety of backgrounds, from software developers to schoolteachers.

The above studies present vital results and tackle issues with respect to bias. The methods and observations of these studies can be taken together to mitigate bias. The current paper thus seeks to encourage users’ creativity, restrict them to defining free-space gestures (excluding voice and touch-based commands), and emphasize their natural communication with imagined, intelligent gesture recognition systems.

Gesture Elicitation Methods

Many HCI studies on gestures focus on designing productive and efficient elicitation methods that encourage users to produce appropriate gestures. As such, in examining and determining a set of gestures that users may easily and intuitively perform, the method of eliciting gestures and ensuring users’ understanding of their performance in context is significant. Many studies adopt a common gesture elicitation method, in which participants define and perform gestures for tasks after watching videos of the results that would occur when completing the tasks. To elicit gestures, Piumsomboon et al. (2013) presented the effect of a task through a 3D animation in the context of augmented reality, and then asked the participants to design and perform gestures. In such studies, participants were also asked to think aloud while designing the gestures. The studies also asked the participants to repeat their gestures three times before finally rating them on two 7-point Likert scales of goodness and ease of use.

In Morris et al.’s (2010) studies on user preference for a simulated surface interface, participants were first shown several video tutorials of an actor demonstrating the respective commands. After viewing each video, participants emulated the actor’s gesture and then rated the gesture (either designer-defined or user-defined) on a 7-point Likert scale. After watching all the videos, performing, and rating the gestures, the participants were presented with six alternative gestures and asked to choose the most appropriate gesture for a particular command. This process was continued for all twenty-two available gestures and resulted in statistics of ranked gestures for the different commands.

Unlike the aforementioned studies, Aigner et al. (2012) used a different method in their investigation for eliciting and understanding gestures in relation to communicative types. The studies employed a Wizard-of-Oz method, where a “wizard” is a person observing gestures conveyed and performed by an “actor,” who was a participant. The two people were spatially separated but could view the actions of the other through a video screen. In consequence, many participants displayed unanticipated behaviors, using iconic and deictic gestures more often than manipulative gestures, which the researchers had expected to be more present. The studies revealed that direct manipulation might not be a suitable type of gesture for a command, and that interfaces would benefit most from users performing gestures that better map to their expectations.

In Morris et al. (2014), the researchers primed participants before the gesture production phase of the study. The study assigned each participant to one of three categories of priming: video priming, no priming, and video and kinesthetic priming. For instance, in video priming, the participants were shown videos of different gestures performed in different settings and asked to perform gestures, while in kinesthetic priming they were asked to perform fifteen specific gestures (e.g. moving around and acting like an elephant). In the studies with no priming, participants were not shown any video and were asked to perform more than one gesture. This helped encourage participants to think and move away from the mouse-click and/or touch-based paradigms. In the exploration of users’ preferences for faceted browsing, users tended to tailor their gestures to the level of specificity of the commands. The study suggests that methods with specific data types and sizes may lead to the production of task-specific gestures.

The reviewed studies are valuable examples of the different gesture elicitation methods that have been used to investigate user-defined gestures and how users are inclined to interact with interfaces in particular contexts. Unlike those studies (especially that of Piumsomboon et al. and of Morris et al.), the current paper, while using videos to display example actions, employs a different gesture elicitation method, in which participants are required to repeatedly and consecutively perform their gestures over a span of one to two minutes altogether. This method examines changes in user preference from the defining stage to practice, as well as which gestures put little stress on users and are preferred by them.

Based on the review of the literature, it is apparent that there is a current gap in knowledge—namely, the relationship between physiological stress and users’ preference for gestures over time. The current thesis therefore focuses on finding the correlation between stress and user preference, as well as the consistency of user preference and the influences on gesture creation and performance. The following sections present the investigation of these factors and the results.

METHODOLOGY

As both qualitative and quantitative research, the thesis investigates factors that may influence users’ preferences towards certain tasks and gestures during long-term performance: classifiable types of gesture; tasks presented to users; and the amount of stress on the users, who respond to each task with gestures. To understand the relevance and interrelationship of these factors, the study considers several variables: users’ preferences towards specific types of gestures as well as tasks, changes in user preference over time, the impact of tasks on gesture definition, and the level of physical stress of each gesture. The current thesis also classifies gestures into types (i.e. deictic, iconic, pantomimic, and symbolic) to further understand how stress influences the types of gestures users choose to perform. To examine user preference and changes in preference, participants were presented with a set of questionnaires asking them to rank their gestures and provide feedback on how they felt about the tasks given to them. These surveys were analyzed to determine the types of gestures that are most suitable for various tasks. To determine the health and efficiency of gestures, the amount of stress produced by each gesture was monitored through a Q sensor in microSiemens (μS). The data provided by the Q sensor will be compared to individual attributes of the performed gestures in a future study. However, from this data, a set of low-stress and user-preferred gestures was found.

Setting and Participants

The studies were conducted in a room where a computer monitor was placed on a three-foot-tall table and represented an interactive display. The surrounding area was large enough for a person to move around in all directions. For recording and for the validity of the data, two cameras were set up to document the participants’ gestures: one positioned at an angle in front of, and another behind, each participant. Fifteen participants (ten females and five males) between the ages of 18 and 22, from various academic disciplines, were recruited for the study to ensure the examination of users with different technical backgrounds. Each study lasted around 60-80 minutes.

Instrumentation

To measure physical stress accurately, a Q sensor was worn on the dominant hand of each participant. The Q sensor has proven to be intuitive and robust, as it is lightweight and measures electrodermal activity (EDA)—changes in the electrical conductance of the skin influenced by mental and physical activity. Thus, it can be used to read stress, emotional arousal, and skin conductance. Physiological stress can be measured, although not exclusively, by observing the amount of activity that causes skin stimuli (Villarejo et al., 2012). The sensor also contains an accelerometer, which provides information regarding the amount of participant movement while suggesting gestures.
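As an illustration of how such EDA readings might be summarized, the following Python sketch computes a simple stress proxy: the elevation of smoothed skin conductance above a resting baseline. The column names and the baseline convention are assumptions made for illustration, not the Q sensor's actual export format.

import pandas as pd

def eda_elevation(eda_csv, baseline_seconds=60.0):
    """Per-sample EDA elevation (microSiemens) above a resting baseline.

    Assumes a log with columns 'timestamp' (seconds) and 'eda_uS'
    (skin conductance in microSiemens); a real sensor export may differ.
    """
    log = pd.read_csv(eda_csv)
    # Smooth momentary artifacts with a short rolling median.
    smoothed = log["eda_uS"].rolling(window=5, center=True, min_periods=1).median()
    # Treat the first minute of quiet standing as the resting baseline.
    baseline = smoothed[log["timestamp"] <= baseline_seconds].mean()
    # Positive values indicate conductance elevated above rest.
    return smoothed - baseline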

Tasks for the Participants

The participants were tasked with creating gestures for the following actions:

1. Activate the sensor (or start communicating with the device, e.g. “waking the screen”);
2. Navigate between pages (paging);
3. Scroll;
4. Select an object to perform an action on (selection);
5. Undo an action (undo command);
6. Return to the previous page.

Procedure

The study started by giving participants a brief introduction, asking them to imagine being inside a furniture store where they would encounter an interactive kiosk. After the instructions, the participants were asked to stand and interact in front of the computer monitor, imagining that the monitor possessed advanced capabilities (i.e. a camera sensor that reads gestures the way a human would). They were also required to imagine that their gestures were readable by, and communicative to, the monitor. Before beginning their performance, the participants put the Q sensor on their dominant hand to log the physical stress of their gestures.

Each participant was shown six 2D animated videos, each played in a loop to demonstrate the computational effect of a command. The commands were activation, selection, scrolling, paging, undo, and return to main menu. For each video or task, the participants created and performed at least four gestures that would be appropriate, communicative, and intuitive for achieving the effect of the command. A list of their gestures was recorded at the top of a paper survey. They then took the paper survey to record their ranking of the gestures and their reason(s) for the ranking, as well as what they felt about the task they were given (see Appendix A). Subsequently, they consecutively repeated all of their gestures for ten seconds each. This process allowed the individuals to practice their gestures as if they were continually interacting with the display, and to evaluate the appropriateness, ease, and maneuvering of each of their gestures. Afterwards, the participants took the same paper survey to record changes to their initial ranking, without the first survey being present. The participants would be told the list of gestures only if they had forgotten a particular one, on the assumption that they would remember their gestures after repetitive performance. This constraint allowed the participants to rank their gestures without being influenced by their answers in the first survey.

Data Processing and Analysis

During analysis of the videos of the participants, each gesture was recorded and coded in a spreadsheet, broken down into specific categories: duration of the gesture performance (in seconds); type of gesture; name of the gesture performed; body parts used; stress (μS) of the gesture; rank in the first survey and in the second; and reasons for the ranking.
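For illustration, one row of this coding spreadsheet could be represented as a record like the following Python sketch; the field names are hypothetical and simply mirror the categories listed above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class GestureRecord:
    """One coded row of the spreadsheet (field names are illustrative)."""
    participant: str          # e.g. "P6"
    command: str              # activation, scroll, page, selection, undo, return
    gesture_name: str         # e.g. "double clap"
    gesture_type: str         # deictic, iconic, pantomimic, or symbolic
    body_parts: List[str] = field(default_factory=list)
    duration_s: float = 0.0   # length of the performance in seconds
    stress_uS: float = 0.0    # mean EDA during the gesture, in microSiemens
    rank_survey1: int = 0     # rank given immediately after creation
    rank_survey2: int = 0     # rank given after repeated performance
    ranking_reasons: List[str] = field(default_factory=list)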

As the Q sensor produced timed logs of the stress level measured while participants performed individual gestures, the videos of the participants were matched against the logs to extract the average μS produced by each gesture.
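A minimal sketch of this alignment step, assuming the sensor log and the video coding share a synchronized clock and using illustrative column names, could look like the following.

import pandas as pd

def mean_eda_per_gesture(eda_log, gestures):
    """Average EDA (microSiemens) over each video-coded gesture interval.

    eda_log:  DataFrame with columns 'timestamp' (s) and 'eda_uS',
              assumed to be synchronized with the video clock.
    gestures: DataFrame with columns 'gesture_name', 'start_s', 'end_s'
              taken from the video coding; all names are illustrative.
    """
    means = {}
    for row in gestures.itertuples(index=False):
        # Select the sensor samples that fall inside this gesture's interval.
        window = eda_log[(eda_log["timestamp"] >= row.start_s) &
                         (eda_log["timestamp"] <= row.end_s)]
        means[row.gesture_name] = window["eda_uS"].mean()
    return pd.Series(means)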

The surveys given to each participant for each task—one after defining the gestures and one after repeating them—are a major indication of user preference and of the reasons for defining their gestures. Each survey has six sections, asking participants for: 1) their ranking; 2) reasons for the ranking; 3) a list of challenging gestures; 4) what they liked about the task; 5) what they disliked about the task; and 6) whether they felt stressed.

Additionally, the results presented in the next section were derived by organizing the data in specific orders and using several analysis methods: percentage of change and the Pearson product-moment correlation coefficient. In finding the results of user ranking, all gestures defined by each participant were ordered into a separate data set and ranked from one to the maximum number available in that set (i.e. the total number of gestures created for a particular command). Doing so allowed each data set to be observed for patterns in the ranking of individual participants. Participants’ rankings from the first and second surveys were grouped separately as well, but closely linked with the intention of perceiving changes in the rankings between the two surveys.

The changes from the first to the second rankings were computed by determining the number of changes in the rank of the user-defined gestures. The correlation between user rankings of gestures and types of gestures was computed using the Pearson correlation. Each gesture was also labeled with an appropriate type, so that a count of the number of gestures of each type occurring at each rank (from 1 to 7) could be obtained for the correlation.
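These two analysis steps could be expressed roughly as follows. The Python sketch assumes simple illustrative data structures (a dictionary of ranks keyed by participant and gesture, and a list of per-gesture records) rather than the exact spreadsheet layout used in the study.

from collections import Counter
from scipy.stats import pearsonr

def percent_rank_changes(first_ranks, second_ranks):
    """Percentage of gestures whose rank differs between the two surveys.

    Both arguments map a (participant, gesture) key to a rank;
    the key structure is illustrative.
    """
    changed = sum(1 for key in first_ranks
                  if second_ranks.get(key) != first_ranks[key])
    return 100.0 * changed / len(first_ranks)

def rank_type_correlation(records, gesture_type, max_rank=7):
    """Pearson r between rank (1..max_rank) and how often the given
    gesture type occurs at that rank."""
    counts = Counter(r["rank"] for r in records if r["type"] == gesture_type)
    ranks = list(range(1, max_rank + 1))
    freqs = [counts.get(rank, 0) for rank in ranks]
    r, _p = pearsonr(ranks, freqs)
    return r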

To identify user-defined gestures that are both low-stress and most preferred by users, all gestures that had a rank of one or two were grouped into one set for both the first and second surveys. Then, from this group, unique gestures were distinguished and counted for the number of times they were defined by the participants.
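A sketch of this grouping and counting step, again with illustrative field names, might look like the following.

from collections import Counter

def top_ranked_gestures(records):
    """Count how many times each gesture name was ranked first or second
    in either survey (field names are illustrative)."""
    top = Counter()
    for rec in records:
        if rec["rank_survey1"] <= 2 or rec["rank_survey2"] <= 2:
            top[rec["gesture_name"]] += 1
    return top

# Example: top_ranked_gestures(all_records).most_common(10) would list the
# gestures most often placed in the top two ranks.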

Validity

To ensure that user-defined gestures were not influenced by the order of the videos shown, participants were presented with one of five video sequences, which consisted of the same videos in different orders. For each participant, the video that demonstrated the effect of achieving activation (i.e. “waking up” the display) was always shown first. This gave the participants time to become familiar with creating and performing gestures for the first time, so that for later tasks they would not be confused or misunderstand what they needed to do.

RESULTS

The results of this thesis are made up of different data visualizations and analyses to discover the relationship, if any, between stress and user preference towards gestures defined for various computational tasks. As physical stress is a relevant factor in determining the usability of any interface, the current thesis seeks to understand how it underlies the influences on user preference towards gestures and their types, as well as how it affects the suitability of user-defined gestures for a particular task or command. The hypothesis assumed before the studies is that there is a negative correlation between user ranking and low-stress gestures, as well as a connection between user preference and types of gestures. One thing to note: due to the time-consuming nature of extracting data from the Q sensor, those measurements will be further scrutinized in future research to see the amount of stress placed on users when performing a particular gesture. However, other factors, such as user preference, the nature of gestures, and the influences of tasks on gesture creation, were examined.

For the activation command, all fifteen participants came up with at least three gestures and a maximum of seven gestures as commands to activate (“turn on” or “start”) a free-space interface (see Appendix B, Figure 1). Two participants were repeatedly encouraged to come up with more than three gestures but had difficulty with this task at the beginning. As mentioned earlier, the activation command was the first command that all participants were tasked with defining gestures for. This allowed them time to understand their role as a user of a free-space interface and minimized any further misunderstanding of their task for subsequent commands. Similar to the activation command, participants came up with a varying number of gestures for scroll. However, as with the page, selection, undo, and return commands, they all came up with the expected number of gestures—that is, at least four gestures to scroll (“navigate the page”) through the content of the interface.

More details on the data analysis are presented in the next section. The following is a summary highlighting significant factors and responses of participants for each command.

Consistency in User’s Preference For Gestures

Across all commands, users often changed the rankings of their gestures between the first and second surveys. Table 1 shows the percentage of changes for each command across the majority, if not all, of the participants.

illustration not visible in this excerpt

Table 1. Table of changes in rankings from the first to second surveys in percentage

The highest change is seen in the activation command (62.9%) (see Appendix B, Figure 2). Appendix B, Figure 2 represents a line graph of gestures ranked immediately after the participants created and performed them. In rare cases, a few participants would give more than one gesture the same rank. In this task, one participant (P6) gave a rank of two to three of the gestures they had defined. This was not due to a lack of understanding of the task, but rather because they preferred those gestures equally. Overall, there was a drastic difference between the first and second rankings of gestures for the activation command. A similar significant change in participants’ rankings after physically performing their gestures was commonly seen across the different commands.

Compared to the other commands, the page command reflected the least change in gestural ranking from the first to the second surveys. Appendix D, Figure 15 shows various patterns of rankings across all participants for the command. Interestingly, two participants showed either strictly ascending or strictly descending rankings of gestures in order of creation (see P1 and P12). Appendix D, Figure 16 visualizes line patterns similar to Figure 15. The participants may have believed that their first-created gestures were more favorable to them than the subsequent ones, as they were encouraged to come up with more gestures. This may have been because participants could come up with only a few natural and comfortable gestures for “going to the next page” or “navigating through the content.”

Relationship Between Stress And User’s Preference

The stress numbers from the participants’ performance of gestures will be examined in a future study. However, user-defined gestures were found to be based chiefly on comfort, intuitiveness, and ease of communication with devices (see Table 7), with increased responses for comfort in the second surveys (see Table 8).

illustration not visible in this excerpt

Table 7. Table showing the percentage of responses for reasons for users’ ranking.

illustration not visible in this excerpt

Table 8. Table showing the changes in reasons from participants in percentage, where negative percentage signifies a decrease in overall responses from participants in that particular category, and a positive percentage denotes an increase in responses.

Based on the first survey responses for the activation command, a majority of participants selected comfort and intuitiveness as the reasons for ranking their gestures. Whereas in the first survey the ranking of the gestures was dominantly based on comfort (28.6%), intuitiveness (34.3%), and ease of gestures for human-computer communication (25.7%), the second survey shows a significant increase in responses for comfort (40.6%) and a 9.3% decrease in the selection of intuitiveness. This may be due to some users later finding their own gestures unnatural and forced.

Responses to the first surveys show how individual participants ranked each of their defined scroll gestures (see Appendix C, Figure 1). The gestures that participants defined first were generally ranked higher than the subsequent ones. There was a 47.2% change in the rankings from the first to the second surveys (see Appendix C, Figure 2). As with the rankings of the gestures themselves, the participants’ reasons for their rankings did not differ much from the first to the second surveys. That is not to say there was no change, however: there was a 7.1% increase in the selection of comfort as the underlying factor for a majority of the participants’ rankings.

For the undo command, responses for intuitiveness increased by 12% in the second surveys, while the other responses stayed more or less the same.

Correlation Between User’s Preference And Types of Gestures

On average, there was a weak negative correlation between the rankings of gestures and the frequency of gesture types. Table 2 presents the average correlation coefficients between gesture type and ranking across all commands.

illustration not visible in this excerpt

Table 2. Average Correlations Between Types and the Ranks of Gestures from Both Surveys

From the rankings in the first surveys, there was an overall weak negative correlation (r = -0.2931, n = 439), computed using the Pearson correlation. The result was similar for the second surveys (r = -0.2322, n = 439). The correlation held for the majority of the types, such as iconic and manipulative gestures. This signifies that gestures ranked higher were only occasionally of these types; specifically, these types were more populated in the middle ranks (from 2 to 4). However, there was a strong negative correlation (r = -0.9155, n = 162) between users’ rankings and deictic gestures, which denotes that gestures that were ranked higher were frequently of deictic types.
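The correlation itself is a standard computation. As an illustration only (this is not the thesis’s actual script), the following short Python sketch shows how a Pearson coefficient between gesture ranks and the frequency of the corresponding gesture type could be obtained; the numbers are invented.

# Illustrative sketch of the Pearson correlation described above; the encoding and
# values are hypothetical, not the thesis's dataset.
from scipy.stats import pearsonr

# Each observation pairs a gesture's rank (1 = most preferred) with how frequently
# gestures of its type (e.g., deictic) occurred across all participants.
ranks          = [1, 1, 2, 2, 3, 4, 5, 2, 1, 3]
type_frequency = [42, 40, 35, 38, 20, 15, 10, 36, 41, 22]

r, p_value = pearsonr(ranks, type_frequency)
print(f"r = {r:.4f}, p = {p_value:.4f}")
# A strongly negative r would mean that gestures of frequently occurring types tend
# to receive smaller (better) rank numbers, as reported for deictic gestures.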

Influences of Commands on Gestures

Looking only at user-preferred gestures is not enough to understand the factors, such as perception and users’ understanding, that contributed to the way the participants defined their gestures. Thus, the surveys given to the participants also asked for their reasons for perceiving, liking, and disliking a task or command. They had the option to select all the reasons that applied or to select none. Table 3 summarizes the responses across all commands.

illustration not visible in this excerpt

Table 3. Table showing percentage of overall responses from participants on liking a particular command, for each command that they created gestures for.

Across all commands, the following table shows the changes in the responses from the first to the second surveys (in percent):

illustration not visible in this excerpt

Table 4. Table showing the changes in responses in percentage from participants on liking a particular command. Negative percentage indicates a decrease in overall responses from participants in that particular category, whereas a positive percentage denotes an increase in responses.

illustration not visible in this excerpt

Table 5. Table showing the percentage of responses from participants disliking a particular command.

illustration not visible in this excerpt

Table 6. Table showing the changes in responses from participants disliking a particular command (in percentage) across all participants. Negative percentage denotes a decrease in overall responses from participants in that particular category, whereas a positive percentage denotes an increase in responses.

Across all commands, a majority of participants selected “others”: a few specified other sources of dissatisfaction with creating gestures for a command, while many wrote “none” or “nothing” to indicate that they had no reason for disliking the specified command.

In the first surveys for the activation command, about 30% of the participants found the activation task “easy to understand,” 29.6% of responses indicated that the task was similar to tasks in existing technologies participants had used before, and 60% of participants found the task “easy to perform gestures for.” Interestingly, the results from the second survey show an increase in the first three reasons (see Appendix B, Figure 6), including a 6% increase in the selection of the task being easy to perform. On the other hand, participants expressed only minor dislikes of the commands they were asked to create gestures for. Before extensively repeating their gestures, some users found the activation task to be confusing (31.3%). However, Table 6 shows a change in users’ dislikes from the first to the second surveys (see also Appendix B, Figure 7). Two participants initially believed the task was either unsuitable for gesture-based interaction or stressful to create gestures for; these responses did not appear in the results of the second survey. In addition, there was an 11% decrease in users finding the task confusing, although two different participants found the task to be unrealistic for gestural interaction.

For the scroll command, as seen in Tables 5 and 6, the responses for liking the command were similar to those for the activation command and changed only slightly from the first to the second surveys, which implies that after repeatedly practicing their gestures, participants were consistent in their reasons. In the first survey, 44% of the selected responses gave specific reasons for disliking creating gestures for the scroll command, whereas the rest were either “others” or unanswered. In the second survey, only 27% of responses remained in these specific categories, a 17% increase in responses that were “others” or no response. Again, a majority of the “others” responses stated simply that the participants had no particular dislike.

The paging command received more responses from participants liking the command in the first three categories (i.e., “easy to understand,” “easy to perform,” and “similar to existing tech.”), with few “others” or unanswered responses. On the other hand, in the first survey some participants felt that the page command was unsuitable for gestures, unrealistic, or confusing. In the second survey, the responses in these categories decreased dramatically, leaving more than 83% of responses as either “others” or unanswered.

Similar to the changes seen for the previous commands, there was a 6 to 13% increase in dislikes across the first four categories for the selection and undo commands, respectively. These responses made up only a small percentage in the first surveys, but the slight increase in the second surveys implies some dissatisfaction with the appropriateness of creating gestures for these commands. Moreover, compared to the small percentage of dislikes, participants’ responses expressing liking were overall high for the two commands. In fact, the undo command had the highest share of liking responses in the first two categories out of all the commands. Comparable to the undo command, the return command also had high percentages of liking in the first two categories.

Association Between The Appropriateness of Tasks And Stress

Given the results from users’ responses on their reasons for ranking their gestures, the majority of responses in both the first and second surveys averaged around 28.0 to 29.3% for comfort, 30.8 to 32.0% for intuitiveness, and 27.8% for ease of communication with the free-space display. Gestures can therefore be assumed to be based significantly on these reasons. The appropriateness of a task, in turn, was judged from the participants’ responses about what they liked and disliked. Appropriate tasks and gestures presumed to be both intuitive and comfortable may be connected, but the results showed no apparent pattern linking the suitability of a task to comfortable gestures.

Challenging Gestures

Based on the user surveys, the collected responses yield a list of gestures that the participants identified as challenging when asked to imagine which of their gestures would be the most difficult to perform for an hour of repetitive use (see Appendix H). 31.9% of the gestures defined across all participants were listed as most challenging to perform repetitively within an hour of usage. Popular gestures, such as circle motion, clap, double taps, form a shape or letter, draw a shape or letter, wave, and swipe, were listed among other apparently stressful ones (e.g., jump, point, lifting limbs out, shake head or body, snap, walk, and spin around).

Low Stress Gestures

Out of all the gestures, the user-preferred gestures ranked either one or two were examined. This helps determine which gestures were commonly preferred across all participants. As the rankings were based on reasons such as comfort and intuitiveness, these gestures are assumed to be low stress and natural for the participants. The data tables (see Appendix B, Figure 5 and 6) list the top two most preferred activation gestures from all fifteen participants. The gestures were chosen based on uniqueness and had a ranking of either one or two. In Appendix B, Figure 5, the gesture “circle limb” represents the motion of revolving a body part (e.g., arm, wrist, leg) through 360 degrees, and the “limb out” gesture is the act of sticking out a hand, arm, or leg. In the first survey, 43.3% of the user-defined gestures were each chosen by only one participant, whereas 57% of the gestures were selected by at least two participants. In the second survey, 45.2% of the user-defined gestures were picked by a single participant, while at least two participants selected each of the remaining 55%. For a list of all unique gestures created by participants for this task, please see Appendix B, Figure 8.
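To make the tallying behind these percentages concrete, the following short Python sketch (hypothetical, not the thesis’s own script) collects every gesture a participant ranked one or two and reports the share of unique gesture labels preferred by only one participant; all labels and data are invented.

# Hypothetical sketch of the "top two" tally: keep gestures ranked 1 or 2, then count
# how many unique gesture labels were preferred by a single participant. Invented data.
from collections import defaultdict

rankings = {  # participant -> {gesture label: rank}
    "P1": {"single tap": 1, "circle arm": 2, "jump": 5},
    "P2": {"single tap": 1, "wave": 2, "snap": 4},
    "P3": {"clap": 1, "circle arm": 2, "spin": 6},
}

supporters = defaultdict(set)
for participant, ranks in rankings.items():
    for gesture, rank in ranks.items():
        if rank <= 2:                      # keep only gestures ranked one or two
            supporters[gesture].add(participant)

single = sum(1 for who in supporters.values() if len(who) == 1)
print(f"{100 * single / len(supporters):.1f}% of top-ranked gestures were "
      f"preferred by only one participant")   # 50.0% with this invented data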

For the scroll command, Appendix C, Figure 12 presents a frequency list of the unique scroll gestures that the participants ranked one or two. 70% of the gestures within the list were preferred by more than two participants, while the other 30% were each preferred by a single participant. Swipe and slide (including both swipe up/down and slide up/down) make up 56% of the list. Compared to the gesture list from the first survey, Figure 13 lists more unique gestures from the second survey. As in the first survey, however, a minority of the gestures (31%) were preferred by one individual participant, whereas 69% were commonly favored by at least two participants, with slide and swipe remaining the most preferred in the whole list.

For the page command, Appendix D, Figure 18 reveals a striking pattern in the list of unique gestures created and most preferred by individual participants. 55% of the gestures received only one response each, while the rest were selected by at least two participants. The most popularly chosen gesture was a simple swipe left/right, which accounted for 32% of all responses. Comparable to Figure 18, Figure 19 shows that 58% of the unique gestures had one response each, while 42% of the gestures were preferred by multiple participants. The gesture selected by the most participants was still “swipe left/right” without any specific body part specified; it held 35% of responses. For the paging command, more than half of the unique gestures were chosen by only one participant, which indicates that a majority of the user-defined gestures vary from one user to another in terms of user preference and inclination.

Additionally, the most popular gesture, “swipe left/right” performed with the hand and/or arm, accounted for only around 30% of the selections, which is fewer than the combined number of gestures chosen by single participants.

Interestingly, the selection command produced fewer unique gestures than the other commands (see Appendix E, Figure 27 and 28). This suggests that it was much more difficult for users to create many gestures that communicate “select” and were both comfortable and intuitive for them. In the first surveys, point, tap, and hold up numbers were frequent gestures ranked similarly by several participants. The grab and draw-circle gestures, although less frequent, were each ranked by two participants. These gestures were again commonly ranked by more than one participant in the second surveys. Yet together they did not dominate the data set, making up only 50% of all gestures in both surveys.

The undo command produced a variety of gestures across all participants (see Appendix F, Figure 31 and 32). Resembling the results of the activation and scroll commands, a majority of the gestures (53.3%) were preferred by more than one participant; these gestures were wave, shake head, draw x, and form x with fingers/arms. The second surveys reflected similar gestures, but the frequent gestures (identical to those in the first survey) made up a smaller percentage (45%) than the gestures that were preferred by only one individual.

Similar to the undo command, the return command shows a range of unique gestures collected from all participants (see Appendix G, Figure 37 and 38), but commonly ranked gestures constituted only around 43.3% in the first surveys, less than the gestures preferred by individual participants. These gestures were double taps, form home with hands, pinch, point, swipe, and tap. The second surveys were similar to the first, where these frequent gestures together covered only 40% overall.

DISCUSSION

The analysis shows a trend of users changing their rankings, and their reasons for the rankings, in the second survey. After repeating their gestures quite extensively, participants were more likely to select comfort, intuitiveness, and ease of communication as their reasons for ranking their gestures. This may be due to participants pragmatically choosing gestures that were comfortable over gestures that were merely intuitive or easy for human-computer communication. It also reveals that although the gestures initially defined appeared instinctive, simple, and suitable for interaction, users later preferred gestures that were above all comfortable while still, to some extent, intuitive and easy for interaction with free-space interfaces. Thus, these gestures can be inferred to be low stress, natural to perform, and appropriate for free-space interaction. Across all commands, the following gestures were most frequently defined and highly ranked: single or double taps, single or double claps, circle arm, slide, wave, swipe, point, hold up numbers, draw in mid-air, and form a shape. Additionally, a somewhat negative association between stress and user preference for gestures can be assumed. Although actual physical stress from the Q sensor was not examined, there was nonetheless a large number of responses selecting comfort and intuitiveness, which serves as an indication of how the gestures were ranked. The higher number of responses ranking gestures based on comfort implies that highly ranked gestures were the least stressful compared to the others.

There is a recurring pattern among the unique gestures that were ranked either one or two. Commonly ranked user-defined gestures made up notably less of the total (between 43 and 50%) than gestures that were preferred by only one participant; in other words, popular gestures accounted for less than half of all gestures proposed by users. This may indicate that users were more likely to create gestures that were personalized and specific to themselves, and that they felt these gestures were comfortable or intuitive for them. Nonetheless, this pattern demonstrates that an overall intuitive gesture language can be difficult to define for all users unless only popular gestures are examined, as in this thesis, where only gestures ranked one or two were grouped together because they were considered most significant among participants. It is therefore important to consider which gestures are meaningful and relevant to the users who will rely on them as a means of interacting with devices.

Across all commands, there was a weak negative correlation between user rankings of gestures and the gesture types. In particular, deictic gesture types had a strong negative correlation with user ranking, which denotes that users were more likely to rank deictic gestures either one or two. This correlation was most evident in the selection command, where “select” gestures instinctively meant using deictic types of gestures, as the command inherently deals with position on the screen. The correlation was most likely due to the users’ technological bias from devices integrated with touch screens, where interaction is largely through the touch and movement of the index finger. Interestingly, individual gestures that consist of a sequence of gestures (e.g., grab and then walk away) had a weak to almost nonexistent relationship with user ranking. These gestures were few in number and were ranked anywhere from one to seven without any specific pattern. Given this, users appear more inclined toward simple gestures that are not sequential and not physically obstructive or limiting to the body. Nonetheless, recognizing types of gestures is significant in designing a well-rounded, intuitive gesture language for free-space interaction.

The list of challenging gestures (see Appendix H) includes both popularly defined gestures and physically complex ones. These gestures all require movements that either cause imbalance or stretching, using the full extent of the arm or normally stationary parts of the body, such as the head and legs. As noted before, Kölsch et al. (2003) write that comfortable bodily movements engage minimal trunk motion while allowing the trunk freedom to move. An example of an uncomfortable movement is lifting one’s leg out and holding it above the floor to indicate selection, where the body does not have the liberty to move. Lenman et al. (2002) state that hand or arm movements without support are more likely to cause fatigue. As shown in the results section, a majority of the gestures listed as challenging gave the participants little freedom to move other parts of their body and could not be supported during mid-air interaction. Thus, challenging gestures can be assumed to be any gestures that would be physically straining to perform for a long duration without feasible limb support or bodily balance.

In regard to the results, it is recommended that designers build a gesture language that includes the aforementioned gestures (e.g., single or double taps, single or double claps, circle arm, slide, wave, swipe, point, hold up numbers, draw in mid-air, and form a shape), as the level of user preference and satisfaction can greatly affect the length of usage of a device and its longevity in the marketplace. It is also advisable for designers to understand the inconsistency of user preference for gestures and the commands they are defined for. Users are likely to change the gestures they preferred at the beginning to ones they find likeable after a period of using those gestures extensively. As with anything, users are capricious and disloyal to devices they feel are not satisfying their needs or expectations.

CONCLUSION

With new advances in the field of human-computer interaction, such as in-the-air gesture recognition systems and brain interfaces, this thesis identifies the influences of factors such as user preference and the nature of tasks on the health of general users. From the results and discussion, users are seen to become more familiar with the gestures they defined after repeatedly performing them, because repetition led them to make a more well-rounded assessment of the physical activity their gestures involved. User preference, as mentioned before, has a tendency to change after a period of interaction with free-space interfaces. Thus, one must consider designing a set of gestures that are consistently preferred by, and minimally stressful to, users. Discovering low stress gestures that are also preferred by and intuitive to users has been a difficult task; data from the Q sensor and other factors, such as which body part was used and the duration of the gestures, will therefore be analyzed in a future study. In addition, investigating the correlation between stress and the types of gestures remains a beneficial aspect to consider for free-space interfaces, but this thesis cannot yet provide an in-depth analysis of it; this research question will also be pursued in future work. The larger question is whether, compared to touch-based and mouse-driven devices, gesture-driven interfaces can become a viable and suitable alternative. In many cases, to ensure that users can and will interact with such devices, designers must confront these stress-related challenges and understand user preference. It is therefore important to recognize that human adaptability to new devices and their functions has created an unstoppable and increasingly important incentive to build and test the reliability of new human-computer interaction features.

REFERENCES

Aigner, Roland, Daniel Wigdor, Hrvoje Benko, Michael Haller, David Lindlbauer, Alexandra Ion, Shengdong Zhao, Jeffrey Tzu Kwan Valino Koh. (2012, November). Understanding Mid-Air Hand Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI. Microsoft Research. Retrieved from http://research.microsoft.com.

Athavale, Swapnil and Mona Deshmukh. (2014). Dynamic Hand Gesture Recognition for Human Computer interaction: A Comparative Study. International Journal of Engineering Research and General Science, 2, 38-55. Retrieved from http://ijergs.org.

Bhuiyan, Moniruzzaman, and Rich Picking. (2011). A Gesture Controlled User Interface for Inclusive Design and Evaluative Study of Its Usability. Journal of Software Engineering and Applications 4, 513-521. Retrieved from http://www.scirp.org.

Cabral, Marcio C., Carlos H. Morimoto, and Marcelo Zuffo. (2005, October). On the Usability of Gesture Interfaces in Virtual Reality Environments. Paper presented at the Latin American Conference on Human-Computer Interaction, New York.

Elmezain, Mahmoud, Ayoub Al-Hamadi, Omer Rashid, and Bernd Michaelis. (2009). Posture and Gesture Recognition for Human-Computer Interaction. In Advanced Technologies (pp. 415-441). Retrieved from http://www.intechopen.com.

Epps, Julien, Serge Lichman, and Mike Wu. (2006). Proceeding CHI '06: A Study Of Hand Shape Use In Tabletop Gesture Interaction. New York, NY: ACM.

Farhadi-Niaki, Farzin, S. Ali Etemad, and Ali Arya. (2013). Design and Usability Analysis of Gesture-based Control for Common Desktop Tasks. Interaction Modalities and Techniques, 8007, 215-224. Retrieved from http://link.springer.com.

Grijincu, Daniela, Miguel Nacenta, and Per Ola Kristensson. (2014). Proceeding CHI '06: User-defined Interface Gestures: Dataset and Analysis. New York, NY: ACM.

Jary, Simon. (2015, Feb. 15). How Much Screen Time is Healthy For Children? Health Risks of Too Much Kid Screen Time. PC Advisor. Retrieved from http://www.pcadvisor.co.uk/.

Karam, Maria and M.C. Schraefel. (2005). A Taxonomy of Gestures In Human-Computer Interaction. Transactions on Computer-Human Interactions, 1-45. Retrieved from http://eprints.soton.ac.uk.

Kölsch, Mathias, Andrew C. Beall, and Matthew Turk. (2003). The Postural Comfort Zone For Reaching Gestures. CiteSeer. Retrieved from http://citeseerx.ist.psu.edu.

Kortum, Philip. (2008). HCI Beyond the GUI: Design for Haptic, Speech, Olfactory, and Other Nontraditional Interfaces. Retrieved from http://dsyc.uaemex.mx.

Lenman, Sören, Lars Bretzner, and Björn Thuresson. (2002). Proceeding from second Nordic conference: Using Marking Menus to Develop Command Sets for Computer Vision Based Hand Gesture Interfaces. NordiChi.

Lin, Dyi-Yih Michael and Hsieh-Ching Chen. (2010). Ergonomics for All: Celebrating PPCOE's 20 years of Excellence: Selected Papers of the Pan-Pacific Conference on Ergonomics. Retrieved from https://books.google.com.

Lindeberg, Tony, Yngve Sundblad, and Lars Bretzner. Computer Vision Based Human-Computer Interaction, CSC. Retrieved from http://www.nada.kth.se/cvap/gvmdi/.

Morris, M.R., Huang, A., Paepcke, A., and Winograd, T. Proceeding CHI '06: Cooperative Gestures: Multi-User Gestural Interactions for Co-Located Groupware. New York, NY: ACM.

Morris, Meredith Ringel, Andreea Danielescu, Steven Drucker, Danyel Fisher, Bongshin Lee, m.c. schraefel, and Jacob O. Wobbrock. (2014). Reducing Legacy Bias in Gesture Elicitation Studies. Interactions, 21, 40-45. Retrieved from http://interactions.acm.org.

Morris, Meredith Ringel, Jacob O. Wobbrock, and Andrew D. Wilson. (2010). Proceeding GI '10: Understanding Users' Preferences for Surface Gestures. Toronto, Ont., Canada: ACM.

Moscovich, T. and Hughes, J. F. Proceeding GI '06: Multi-finger cursor techniques. Toronto, Ont., Canada: ACM.

Murphy, Lawrence R. and Theodore F. Schoenborn. (1987). Stress Management in Work Setting. Retrieved from www.cdc.gov.

Nacenta, Miguel A., Yemliha Kamber, Yizhou Qiang and Per Ola Kristensson. (2013). Proceedings of the SIGCHI: Memorability of Pre-designed & User-defined Gesture Sets. New York, NY: ACM.

Nielsen, M., Störring, M., Moeslund, T.B., and Granum, E. (2003). A Procedure For Developing Intuitive and Ergonomic Gesture Interfaces For HCI. Gesture-Based Communication in Human-Computer Interaction, 2915, 409-420. Retrieved from http://link.springer.com.

Oh, U. and L. Findlater. (2013). Proceedings of the CHI '13: The Challenges And Potential Of End-User Gesture Customization. New York, NY: ACM.

Pavlovic, Vladimir, Rajeev Sharma, and Thomas S. Huang. (1997). Visual Interpretation Of Hand Gestures For Human-Computer Interaction: A Review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, 677-695. Retrieved from http://ieeexplore.ieee.org.

Piumsomboon, Thammathip, Adrian Clark, Mark Billinghurst, and Andy Cockburn. (2013). Proceeding CHI EA '13: User-Defined Gestures for Augmented Reality. New York, NY: ACM.

Quek, F., D. McNeill, R. Bryll, S. Duncan, X.-F. Ma, C. Kirbas, K. E. McCullough, and R. Ansari. (2002). Multimodal Human Discourse: Gesture and Speech. ACM Transactions on Computer-Human Interaction, 9, 171-193. Retrieved from http://dl.acm.org.

Truyenque, Michel Alain Quintana. (2005). A Computer Vision Application For Hand-Gestures Human Computer Interaction. Retrieved from http://www.maxwell.vrac.puc-rio.br.

Valdes, Consuelo, Diana Eastman, and Casey Grote. (2014). Proceeding CHI '14: Exploring the Design Space of Gestural Interaction with Active Tokens through User-Defined Gestures. Toronto, Canada: ACM.

Vafaei, Fereydoon. (2013). Taxonomy Of Gestures In Human Computer Interaction. Retrieved from North Dakota State University, Fargo: http://library.ndsu.edu.

Villarejo, María Viqueira and Begoña García Zapirain. (2012). A Stress Sensor Based on Galvanic Skin Response (GSR) Controlled by ZigBee. Sensors, 12.5, 6075-6101. Retrieved from http://www.ncbi.nlm.nih.gov.

Workplace Stress. The American Institute of Stress. Retrieved from http://www.stress.org/workplace-stress.

Zhang, Xu, Xiang Chen, Yun Li, and Vuokko Lantz. (2011). A Framework for Hand Gesture Recognition Based on Accelerometer and EMG Sensors. IEEE Transactions On Systems, Man, And Cybernetics — Part A: Systems And Humans, 41, 1064-76. Retrieved from http://ieeexplore.ieee.org.

APPENDICES

Appendix A

Participant Number:

Gesture Elicitation Study - User Survey

Question 1: For this task, please rank your preference for each gesture in an ascending order, with 1 being most preferred.

Question 2: What was the reason or factor for your ranking? Ranking was based on:

a. Comfort
b. Boost in energy or mood
c. Intuitiveness for this task (in other words, suitable and brought to mind first)
d. Creativity
e. Ease of communication with computational devices (easy for computers to recognize gestures)
f. Others:

Question 3: Imagine doing the same gestures repeatedly for an hour. Which of your gestures do you think would be most stressful to you?

Please list them here.

Question 4: What do you like about this task? Please select all that apply to you.

a. Easy to understand
b. Easy to perform gestures for
c. Similar to technologies you have used in the past
d. Others: List other reasons for your preference.

Question 5: What do you dislike about this task?

a. Unsuitable for gestures
b. Unrealistic
c. Stressful
d. Confusing
e. Others: Please list other reasons here.

Question 6: Did you feel physically stressed out doing this task?

a. Yes
b. No
c. Others:

Appendix B

Activation Command

Figure 1. Ranking of Gestures for First Survey of Activation

illustration not visible in this excerpt

Figure 2. Ranking of Gestures for Second Survey of Activation

illustration not visible in this excerpt

Figure 3. Reasons for Ranking Activation Gestures

illustration not visible in this excerpt

Figure 4. All Unique Gestures Ranked 1 and 2 from First Survey

illustration not visible in this excerpt

Figure 5. All Unique Gestures Ranked 1 and 2 from Second Survey

illustration not visible in this excerpt

Figure 6. Reasons for Second Ranking

illustration not visible in this excerpt

Figure 7. Reasons for Disliking Activation Task

illustration not visible in this excerpt

Figure 8. List of Unique Gestures Found for Activation Command

illustration not visible in this excerpt

Appendix C

Scroll Command

Figure 9. First Survey Ranking of Gestures for Scrolling

illustration not visible in this excerpt

Figure 10. Second Survey Ranking of Gestures for Scrolling

illustration not visible in this excerpt

Figure 11. Reasons for Ranking Scrolling Gestures

illustration not visible in this excerpt

Figure 12. Top Two Ranked Scrolling Gestures from First Survey

illustration not visible in this excerpt

Figure 13. Top Two Ranked Scrolling Gestures from Second Survey

illustration not visible in this excerpt

Figure 14. Reasons for Liking Scrolling Task

illustration not visible in this excerpt

Figure 15. Reasons for Disliking Scrolling Task

illustration not visible in this excerpt

Appendix D

Page Command

Figure 16. First Ranking of Paging Gestures

illustration not visible in this excerpt

Figure 17. Second Ranking of Paging Gestures

illustration not visible in this excerpt

Figure 18. Reasons for Ranking Paging Gestures

illustration not visible in this excerpt

Figure 19. Top Two Ranked Paging Gestures from First Survey

illustration not visible in this excerpt

Figure 20. Top Two Ranked Paging Gestures from Second Survey

illustration not visible in this excerpt

Figure 21. Reasons for Liking Paging Task

illustration not visible in this excerpt

Figure 22. Reasons for Disliking Paging Task

illustration not visible in this excerpt

Appendix E

Select Command

Figure 23. First Ranking of Gestures for Selection Task

illustration not visible in this excerpt

Figure 24. Second Ranking of Gestures for Selection Task

illustration not visible in this excerpt

Figure 25. Reasons for Ranking Selection Gestures

illustration not visible in this excerpt

Figure 26. Reasons for Liking the Selection Task

illustration not visible in this excerpt

Figure 27. Reasons for Disliking the Selection Task

illustration not visible in this excerpt

Figure 28. Top Two Ranked Selection Gestures from First Survey

illustration not visible in this excerpt

Figure 29. Top Two Ranked Selection Gestures from Second Survey

illustration not visible in this excerpt

Appendix F

Undo Command

Figure 30. First Ranking of the Undo Task

illustration not visible in this excerpt

Figure 31. Second Ranking of the Undo Task

illustration not visible in this excerpt

Figure 31. Top Two Ranked Undo Gestures from First Survey

illustration not visible in this excerpt

Figure 32. Top Two Ranked Undo Gestures from Second Survey

illustration not visible in this excerpt

Figure 33. Reasons for Ranking Undo Task

illustration not visible in this excerpt

Figure 33. Reasons for Liking Undo Task

illustration not visible in this excerpt

Figure 34. Reasons for Disliking Undo Task

illustration not visible in this excerpt

Appendix G

Return Command

Figure 35. First Ranking of the Return Task

illustration not visible in this excerpt

Figure 36. Second Ranking of the Return Task

illustration not visible in this excerpt

Figure 37. Top Two Ranked Return Gestures from First Survey

illustration not visible in this excerpt

Figure 38. Top Two Ranked Return Gestures from Second Survey

illustration not visible in this excerpt

Figure 39. Reasons for Ranking Return Task

illustration not visible in this excerpt

Figure 40. Reasons for Liking Return Task

illustration not visible in this excerpt

Figure 41. Reasons for Disliking Return Task

illustration not visible in this excerpt

Appendix H

Challenging Gestures

illustration not visible in this excerpt

[...]


