Exploring the Google Glass UX

As wearable devices enter the mainstream, UX designers must develop ways to maximize those devices’ potential while acknowledging the new limitations they impose. That’s what the software team at ELEKS concluded after evaluating Google Glass: an experience that forced them to abandon their expectations about head-mounted wearables, adapt user experiences to tiny screens, and forget about keyboards altogether.

For many UX designers, Google Glass evokes visions of an Iron Man-like interface with numerous controls and augmented-reality features. Our team at ELEKS, too, fell victim to these assumptions. It was only after designing and developing multiple applications for Google Glass that we began to truly understand its distinctive features, and how to work within its limitations. In particular, we came across numerous technical and contextual challenges that few in the UX space will have encountered before.

As the market for Google Glass, and thus the market for compatible applications, continues to expand, we feel it is vitally important for UX designers to share their experiences creating applications for the device. It’s in this spirit that we’re sharing our own.

Technological limitations

We began playing with Glass in August of 2013. Since then, our team of designers, analysts, and engineers has worked on seven related projects, ranging from business concepts to fully operational applications. Most of these projects catered to unique usage scenarios and provided an application from which clients could benefit, either by opening new opportunities or by optimizing business processes.

First, we discovered that the predominant way to interact with Google Glass was via the Mirror API, which showed text...
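To give a sense of what working with the Mirror API involves: it is a server-side REST interface through which an application inserts "timeline cards" onto the wearer's Glass by POSTing JSON to Google's timeline endpoint. A minimal sketch of inserting a text-only card might look like the following, assuming an OAuth 2.0 access token with the glass.timeline scope is already in hand; the helper name insert_text_card is our own illustration, not part of the API.

```python
import requests

# Mirror API timeline endpoint as documented by Google (the API has
# since been deprecated along with the Explorer edition of Glass).
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def insert_text_card(access_token: str, text: str) -> dict:
    """Insert a plain text timeline card onto the wearer's Glass.

    `access_token` is assumed to come from a standard OAuth 2.0 flow
    granting the https://www.googleapis.com/auth/glass.timeline scope.
    """
    response = requests.post(
        MIRROR_TIMELINE_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        json={"text": text},  # a bare text card; richer cards use "html"
    )
    response.raise_for_status()
    return response.json()  # the created timeline item, including its id

# Example: push a short status message to the wearer's timeline.
# card = insert_text_card(token, "Build #214 passed")
```

Because the card is pushed from the application's own server, no code runs on Glass itself, which is much of why the Mirror API became the default integration path for early Glass applications.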