Positioning Glass: Investigating Display Positions of a Monocular Optical See-Through Head-Mounted Display

Overview

The display of Google Glass sits in the top half of our field of vision. But is that the best position for every scenario in which we receive information from the device? We conducted an experiment to find out, and we discovered much more along the way.


Google Glass’s display is designed to sit in the top half of our field of vision. But is that the best position for all scenarios? Image courtesy of Google.

Problem Statement

Google Glass opens up a whole new class of wearable use cases, allowing users to glance at information without looking at their phones or smartwatches. Its current design puts the display in the top-center position of our field of vision, with its perceived “screen” appearing some distance away. Since the information on the screen (e.g., notifications, directions) is usually secondary to our primary activity, this design decision seems to make sense. However, we questioned it: why not at the center? Why not towards the right? To answer these questions in the context of modern wearables, we conducted a user study.


Examples of information presented on the Glass.

Audiences

This project has implications for people wearing and designing augmented reality (AR) glasses such as Google Glass, Epson Moverio, and others. As these devices are used predominantly in mobile settings, e.g., while walking the dog, riding transit, or running, it is pivotal for users to maintain good awareness of their surroundings while attending to information on the glasses. Determining the best position for the display in our field of vision is therefore a crucial human factor to consider. With smartphone users having surpassed 1 billion, and our eyes being the next logical place (after smartphones and watches) for innovation in the wearable space (think Oculus Rift, Snapchat Spectacles), our work lays the necessary design foundation for future AR displays.

Process

There were four members on this project: I was the research lead, working with one software engineer and two research advisors. In the early stage of the project, I was responsible for determining the right research problem by reviewing relevant literature published over the last three decades. Following that, I designed and conducted a series of pilot studies to validate and crystallize the research problem. Since the display of the Google Glass is fixed, we had to explore different methods of modifying its position while taking important visual factors (visual angle, focal distance) into consideration (see Constraints). I was involved in brainstorming and implementing the different methods of altering the display position. The software engineer wrote the Android and Java code for the program used in the study. I conducted the experiment itself and analyzed both the quantitative and qualitative data. From the data, we proposed design guidelines on the best display positions to adopt in different scenarios and contexts.

Main Findings

We investigated nine display positions formed by crossing three elevation angles with three azimuth angles (see the figure below). We designed a simulated driving scenario as the primary task, with receiving notifications on the Glass as the secondary task.


In the left image, the red box on the top left of each illustration represents the display position from the user’s point of view. In the right image, the angles were chosen based on the physical limitations of the Google Glass.
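
To make the nine conditions concrete, here is a minimal Java sketch of how they fall out of crossing three elevations with three azimuths. This is illustrative only, not the study’s actual code, and the enum labels are hypothetical stand-ins for the angle values we used:

public class DisplayConditions {

    enum Elevation { TOP, MIDDLE, BOTTOM }
    enum Azimuth { LEFT, CENTER, RIGHT }

    static final class Condition {
        final Elevation elevation;
        final Azimuth azimuth;

        Condition(Elevation elevation, Azimuth azimuth) {
            this.elevation = elevation;
            this.azimuth = azimuth;
        }

        @Override
        public String toString() {
            return elevation + "-" + azimuth;
        }
    }

    public static void main(String[] args) {
        // 3 elevations x 3 azimuths = 9 display positions.
        for (Elevation e : Elevation.values()) {
            for (Azimuth a : Azimuth.values()) {
                System.out.println(new Condition(e, a));
            }
        }
    }
}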

We discovered that while placing the display at the middle center of our vision yields the fastest performance, our participants did not prefer that location because it occludes the road while driving in the simulator. We learned that the characteristics of the primary task are a very important factor in determining the best display position, and many mobile scenarios (e.g., walking) share those characteristics with driving.

Instead of the original top-center position prescribed by Google, we found that placing the display at the middle right of our field of vision strikes the best balance between noticeability performance, distraction, and comfort in our study context. For more data, refer to our paper.
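
As a rough illustration, noticeability performance in a dual-task study like this is typically operationalized as reaction time: the delay between a notification appearing on the display and the participant acknowledging it. Here is a minimal Java sketch under that assumption; the class and method names are hypothetical, not from our study code:

public class ReactionTimer {

    private long shownAtNanos = -1;

    // Called when a notification is drawn on the Glass display.
    public void onNotificationShown() {
        shownAtNanos = System.nanoTime();
    }

    // Called when the participant acknowledges the notification
    // (e.g., taps the touchpad); returns the reaction time in ms.
    public long onNotificationAcknowledged() {
        if (shownAtNanos < 0) {
            throw new IllegalStateException("No notification pending");
        }
        long reactionMs = (System.nanoTime() - shownAtNanos) / 1_000_000;
        shownAtNanos = -1;
        return reactionMs;
    }
}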

Constraints

When designing the study, we ran into a technical constraint: the position of the Glass’s display is not adjustable. In principle, this could be solved by removing the display module from its chassis. However, after discussing it with the engineer, we realized that the module could not be disassembled without breaking the device, and using an alternative HMD was not an option since it would reduce the ecological validity of the study context. Eventually, we discovered that the desired visual angles could be achieved by suspending the Glass in mid-air around the nose bridge. We devised a methodology for doing so, and ran a few short pilots with the setup to confirm that the comfort and user experience of wearing the Glass were not affected.
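
For the curious, the geometry behind this workaround is plain trigonometry: a display whose virtual screen is perceived at focal distance d appears at angle θ from straight ahead when its center is offset by d · tan(θ). A minimal Java sketch with illustrative values only (the actual angles and focal distance are in the paper):

public class DisplayOffset {

    // Offset (in the same unit as focalDistance) needed to place the
    // display center at the given angle from straight ahead.
    static double offsetFor(double focalDistance, double angleDegrees) {
        return focalDistance * Math.tan(Math.toRadians(angleDegrees));
    }

    public static void main(String[] args) {
        double focalDistanceM = 2.4; // illustrative focal distance in meters
        // Illustrative angles only; see the paper for the values we used.
        for (double deg : new double[] {-10.0, 0.0, 10.0}) {
            System.out.printf("angle %+.1f deg -> offset %.3f m%n",
                    deg, offsetFor(focalDistanceM, deg));
        }
    }
}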

Lessons Learned 

The main lesson I learned from this project is how important it is to define and understand the contexts in which (design) findings can be applied. We made this mistake early in the project: we didn’t specify the usage contexts in which our findings could appropriately be applied, even though we knew that human factors and user experience design are context-dependent. After learning our lesson, we put more effort into understanding the contextual applicability of our study data, which led to a better project in the end.
