Google Glass is the newest advance in wearable technology. It comes in the shape of a pair of glasses that allow users to carry the power of a smartphone around with them and receive information in the blink of an eye.
Google first showed the world what a life with Glass could look like with this initial concept video on 4 April 2012. It was a future that excited many people – including developers – around the world, and Google has been working hard over these last two years to make that a reality.
Several weeks ago, on 15 April 2014, Google released a limited number of Glasses to the general public in the USA (still as a prototype device). U1 Group hurriedly snapped up a pair and shipped the prized package to Australia, quick smart. Brimming with excitement, our team of consultants have been testing the device over the last few days to better understand what the user experience of Google Glass is really like.
In this article, we do not just talk about the invasion of privacy, how fast digital is moving, or where the future of digital interaction may be headed. Rather, we aim to give you a close, practical look at how Google Glass works and feels on a day-to-day basis.
For those interested in a critical analysis of Google Glass from a professional UX perspective – as well as the issues the technology presents – we invite you to read on.
The ‘out of the box’ customer experience (CX)
The packaging and peripherals that accompany the Glass are beautiful, simple and scream quality the moment you look at them. These are all things Google has been working hard to instil aesthetically across all of its products for a consistently heightened customer experience.
The glasses themselves are lightweight and comfortable to wear. On putting them on, you are presented with a short introduction and interactive lessons on how to use the device. You are then taken through a step-by-step process that shows you how to pair Glass with your phone, and then connect to your choice of app or website interface for managing the device.
This combination of visual direction and interactivity throughout the set-up process makes this alien technology seem instantly user-friendly and intuitive. However, some areas are still lacking.
The set-up process for emails and contacts is unclear, along with how to add additional apps. For functionality that goes beyond the bare basics, you seem to be simply directed to the Glass app or website to figure out the remainder of the set-up process yourself.
To see how the basics are supposed to work, check out Google’s video ‘How to get started’ using Glass.
User interface (UI) and interaction
The interface is made up of what Google calls ‘cards’. Each card displays text, video, pictures or a combination of these, depending on its purpose. Cards are rectangular and take up the entire screen, so you see only one card at a time. They sit side by side in a horizontal line, like a ribbon of content that you can navigate from side to side.
Navigating between cards
There is a home screen in the middle of all of this which simply has the time displayed and the words “ok glass”. To the left of the home screen is content that is current or recurrent such as events, weather and settings, while the right shows any past actions taken by the Glass (e.g. photos recently taken or content you have viewed).
The interaction with the device is simple and intuitive for the most part:
- tapping the touchpad on the right side of the device will select an item or perform an action
- swiping your finger forward will pan the display view to the right along the row of cards
- swiping backwards will pan you to the left of your interface, and
- swiping down will exit out of the menu or area of interface you were viewing.
The most difficult of these actions to understand is tapping, as the nature of the interaction is dependent on the context. Sometimes tapping will present you with more options or menu items, while other times tapping will commence an action or select an item. These interactions are not always clear and can lead to some unwanted outcomes.
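As a rough illustration (not Glass’s actual code, and with made-up card names), the timeline described above can be sketched as a row of cards with the home screen in the middle, where forward and backward swipes pan the view along the row:

```python
# Hypothetical sketch of the Glass card timeline: a horizontal row of cards
# with the home screen in the middle. Current/recurrent content sits to the
# left of home, past actions to the right; swipes pan the view along the row.
class Timeline:
    def __init__(self, current, home, past):
        self.cards = current + [home] + past
        self.index = self.cards.index(home)  # start on the home screen

    def swipe_forward(self):
        """Pan the view right along the row (towards past actions)."""
        self.index = min(self.index + 1, len(self.cards) - 1)
        return self.cards[self.index]

    def swipe_back(self):
        """Pan the view left along the row (towards current content)."""
        self.index = max(self.index - 1, 0)
        return self.cards[self.index]

timeline = Timeline(current=["settings", "weather", "calendar"],
                    home="ok glass",
                    past=["photo", "search result"])
print(timeline.swipe_forward())  # pans right to the most recent past action
```

A tap, by contrast, acts on whatever card is in view – which is exactly why its effect is context-dependent in a way swipes are not.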
The touchpad on the side of Glass is, in most cases, fast and responsive. That sensitivity can be a pain, though: it frustratingly picks up small bumps and accidental movements. And for all its sensitivity, it is inconsistent – it sometimes struggles to register a downward swipe, or misinterprets it as a tap.
You are alerted to actions that have taken effect via sounds in your ear. A tap or a scroll onto another screen plays a click; closing out of something plays a ‘swoosh’ to reinforce what is happening on the device. These cues help you navigate the device.
To create content on Glass, you need to use voice commands that are triggered after you utter the phrase “ok glass”.
Today, we live in the age of Siri, Google Now and Cortana – virtual personal assistants on all major mobile phones that understand your questions and deliver results, or interact with other applications on your behalf. With these benchmarks already in the marketplace, the voice recognition on Glass was all the more disappointing.
It’s hard to describe how frustrating it is to repeat yourself multiple times, only to have Glass accurately capture just 80% of your message, with no simple way of removing the problem words – forcing you to start over.
This was exacerbated by occasional delays between saying “ok glass… Google” and the device triggering the recognition process, causing it to only hear the end of your request or miss the window of time it allows for you to speak.
While Glass generally understood what was said, there were numerous times it struggled. Less common words and requests for directions proved particularly difficult. This is a major flaw given that voice recognition is the only way of interacting with some of the main functionalities of the device.
Another area in which Glass’ voice recognition really disappointed was its integration into the operating system. Aside from the home screen and a handful of Google apps, there was no voice recognition when you said the magic words “ok glass”. What’s worse is that when you did interact with its voice recognition capabilities, you had to speak the exact words given by Glass to make a command. For example, instead of saying “ok glass, call…”, which would seem intuitive for most users, you must use the exact phrase “ok glass, make a call to…”.
While Glass does present all of these commands once you have triggered its voice recognition function, it becomes increasingly difficult to scroll through to the specific command you require, as the list grows with every app you install.
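The exact-phrase problem described above can be illustrated with a toy matcher (the command list and matching logic are my own sketch, not Glass’s implementation): the recogniser only fires when the utterance begins with a registered phrase verbatim, so the intuitive shorthand fails.

```python
# Hypothetical illustration of exact-phrase voice commands: an utterance
# matches only if it starts with a registered command word-for-word.
PREFIX = "ok glass, "
COMMANDS = ["take a picture", "record a video",
            "make a call to", "get directions to"]

def match_command(utterance):
    if utterance.startswith(PREFIX):
        utterance = utterance[len(PREFIX):]
    for command in COMMANDS:
        if utterance.startswith(command):
            return command
    return None  # no exact match -- the natural phrasing is rejected

print(match_command("ok glass, make a call to Alice"))  # matches
print(match_command("ok glass, call Alice"))            # returns None
```

A more forgiving matcher – accepting synonyms or partial phrases – would remove much of the friction described above, at the cost of more ambiguity for the recogniser to resolve.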
Living with Glass
While going about your daily tasks with the screen inactive, it is quite easy to ignore Glass – it sits almost out of your field of view. Notifications do not open on screen without warning; instead, a small chime plays in your ear to let you know a new item is available to view. Simply tilting your head up 30 degrees (the angle can be adjusted) or tapping the side of the device will show you the related content. Notifications can also be viewed later by scrolling to the right of the home screen.
It makes a lot of sense for the user that notifications are not presented on screen as they occur – they could be distracting or annoying. But I can foresee the audio indicators in your ear becoming quite irritating if you have multiple services updating throughout the day with no way to manage their frequency or what content gets through.
Simple modes like ‘silent’ and ‘vibrate only’ are standard settings that should be present in a device of this nature. Better yet, users should be allowed to dictate quiet hours of use or implement geo-fencing to activate a silent mode at work. Other basic customisations that need to be included are the ability to organise your cards and pin your favourites in place, and change the default time before the screen turns off.
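The quiet-hours and geo-fencing idea suggested above is straightforward to sketch (a hypothetical illustration only – the coordinates, radius and time window are invented for the example):

```python
# Hypothetical sketch of quiet hours plus a geo-fenced silent mode at work.
# The work location, fence radius and quiet-hours window are made up.
from datetime import time
from math import dist

QUIET_HOURS = (time(22, 0), time(7, 0))        # overnight window (wraps midnight)
WORK_GEOFENCE = ((-37.8136, 144.9631), 0.005)  # (lat, lon) centre, ~500 m in degrees

def should_silence(now, position):
    start, end = QUIET_HOURS
    in_quiet_hours = now >= start or now <= end  # window wraps past midnight
    centre, radius = WORK_GEOFENCE
    at_work = dist(position, centre) <= radius
    return in_quiet_hours or at_work

print(should_silence(time(23, 30), (-37.9, 145.1)))  # quiet hours -> True
```

Even this trivial rule – suppress the chime when either condition holds – would address the notification fatigue described above without any per-app configuration.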
Getting out and about
The device gives the illusion that it is less distracting than a conventional screen because you can still partly see the world around you, out in front, rather than staring at your mobile screen and feet. But make no mistake: Glass requires your full attention.
The Google Glass screen is positioned just out of your line of sight, requiring you to actively shift your gaze to the screen to use it at all. This is no better or worse than existing alternatives, but could cause problems if people intend to use it for navigation while driving.
Navigation with Glass was great. Simple maps were presented as needed with turn-by-turn instructions spoken into your ear. You are also given the option of a route overview if more information is required.
However, the real letdown in the experience was the hardware itself: the display could not cope with the glare of the sun, which made the screen impossible to read. This issue reappeared in general outdoor use as well.
Battery life was unfortunately woeful with Glass offering only around five hours with light to moderate use. This simply wouldn’t be enough to satisfy basic usage scenarios for most people, especially with a wearable device where the expectation is it will be with you all day. Overheating also appeared to be a common issue. It took very little for Glass to get warm to the touch as it was pressed up against your temple. It was quick to notify you of these circumstances by displaying “Low battery. Connect charger” or “Glass must cool down to run smoothly” on your home screen when required.
These are teething issues, of course, as the Glass currently in circulation is still a development device with many bugs in its software. But the importance of these factors could make or break the technology in the consumer market.
Not living up to expectations
After spending a few days with Glass, I found that the device was not living up to my hopes or expectations. It was supposed to be a hands-free device that would put the technology there when I needed it, but be out of the way when I didn’t. The more I used Glass, the more I realised it was not a solely hands-free device; users have to interact with the touchpad for any real functionality to occur.
When interacting with applications, the experience felt crippled compared to my phone, which raised the question: “What can Glass offer that my phone doesn’t already do for me, with greater detail and at a cheaper price point?”
So what does Glass offer that smartphones can’t?
As the screen on Glass is so small, there is a limited amount of text or content that can be shown. If any further information is required, the interaction cost with the device rises. This is why Glass works best with audio input and audio-visual output; otherwise the interaction cost is just as high as using a mobile phone (which provides far superior functionality). It is in this area that Glass and its applications need to push themselves and excel to remain relevant.
Glass’ real benefit over smartphones is simply its position on the body, always available for the fast retrieval of information. This positioning will be one of the key driving points for Glass: any technology like augmented reality, face recognition or video capture that benefits from a user point-of-view position will be greatly enhanced on Glass.
Applications like Word Lens (the automatic visual language translator) are currently available on smartphones, but using them on Glass makes certain tasks so much easier. For example, having your view of the world instantly translated into your native tongue is incredibly powerful (although not always accurate).
This fast, hands-free approach is what I wanted from Glass, but it is sadly not quite there yet. However, the future looks bright. Google needs to work hard to distil content and queries into its bite-sized cards (as it has done for weather and stocks), to ensure users can get the information they need without having to navigate a webpage on the tiny screen.
Webpage interaction as it currently stands is horrible and clunky, making it almost impossible to find an answer to your query unless it is covered in the short page description that Google provides you with before entering.
Glass should fully utilise its benefit as an ‘always-on-the-body’ technology, as other wearable devices have done with great success. Steps taken, heart rate, daily habits and other meaningful data that a smartphone can’t track are examples of what Glass could leverage in the future.
Glass’ largest effect in aiding people in their daily lives will be for users with disabilities. The ability for technology to identify objects for a blind user, transcribe audio for the deaf, or magnify objects for the vision-impaired would be life-changing for many. Sadly, many of these usage scenarios serve very niche markets or very specific purposes and types of applications.
Google Glass: Is the world ready?
There is no denying the hype around the device from tech enthusiasts, but will average consumers feel the same way? There have already been signs of animosity towards the device in San Francisco, where a number of Glass-related muggings and assaults have been reported.
The critical factor in most of these cases is Glass’ camera and how people are using the technology. The main concern is regarding what is being filmed or photographed, and when, with no consent needed or warning given. This has the potential to be a very large security risk for many businesses and situations, not to mention invading people’s personal privacy on a new level.
While mobile phones saw some of the same issues raised when cameras were first incorporated into the devices, Glass has a far more inconspicuous method of capturing the world around it. Will consumers be able to get over this social hurdle, or will this hostility to the idea make Glass dead on arrival?
The boundaries of social acceptance may also be pushed by the way in which users interact with Glass. Google has even released its own guidelines for Glass etiquette, to help users avoid being labelled a ‘Glasshole’!
So much of one’s personality is transferred through the eyes, and interacting with Glass during a conversation gives the appearance that you have become soulless. It is an unfamiliar look, and unsettling to be around.
Wearing Glass in public is a socially nerve-racking experience – never has something made me feel so self-conscious. The stress this presents is horrible. Plus, the design of Glass is so striking that it only serves to feed the feeling that everyone is looking at you.
(In Google’s defence, this is still not a mainstream product. It is a first of its kind, which causes people to be naturally curious. Google are currently working with partners such as Ray-Ban to develop more conventional looking eyewear to reduce this social leap that Glass users need to be prepared for when in public.)
Nonetheless, the stress of the experience led me to change the frames to the alternative you are provided with (we were given two frames: one standard, and one designed for users with prescription lenses). The more conventional design helped in some ways, but it was still quite evident that I was wearing Glass, especially when the screen was active and displaying a light in front of my eye (like a scene straight out of ‘Terminator’).
I believe another prominent danger for Glass is the emergence of other wearable devices like smartwatches. These devices are far less of a social leap for consumers, yet provide similar functionality in a number of areas – and are offered at a lower price point.
Despite the challenges that Glass presents, there are still so many applications or situations in which Glass works so well. And there is a clear demand for it in the marketplace (take the growth of ‘point-of-view’ video capture devices like GoPro, for example).
The real UX challenge for Google is to deliver a product that manages to balance all of the following:
- A user friendly and intuitive interface
- An exciting and immersive experience
- Being usable without being overwhelming
- Addressing the social stigma of wearable technology
- Hardware that supports the functionality of the software
- Battery life that reflects the nature of the wearable device
- Speaking the user’s language, not enforcing interaction styles
- Plenty of assistance because of the new nature of the technology
- Ensuring all touch points with the consumer are positive interactions
- Ability to work for the user, providing customisation and personalisation
- Useful functionality with an interaction cost comparable to (or lower than) pulling out your mobile phone.
Failure to deliver on these items will affect the usability of Glass, and in turn adoption rates among consumers. If Google can’t meet these points, Glass becomes not a usable device but one that users have to work around.
Changes will be made and the device will become more functional and relevant over time, but remember that this was a device Google originally hoped to release at the end of last year. It is clear that Glass still has a while to go before it delivers on its promise of changing your lifestyle and giving you a truly unique user experience.
At U1 Group, we will continue to test and apply Google Glass to other situations to better understand the possible UX benefits and drawbacks that this new technology has to offer.
We welcome your suggestions on what to test, and would love your feedback. Please post your comments for discussion on our LinkedIn page. We’ll do our best to address all Glass queries (or requests!) for you.
For updates, please sign up to our newsletter by entering your email address in the footer below.