The Major Shortcoming of Google Glass

So far I've only written about the opportunities for Glass. Everything I've written has been overwhelmingly positive. But Google Glass has many shortcomings.

The greatest shortcoming is the lack of a rich UI that the user can actively engage with. The point of Glass is to make the technology go away, not put it in your face (no pun intended). Unlike PCs, tablets, and smartphones, Glass is intended to be as close to invisible as possible. As a result, it's much more difficult for the user to provide complex input.

For the 1.0 launch, the Glass hardware can recognize the following input methods, though it's unclear if and how developers will have access to all of them (a rough sketch of how an app might read some of these follows the list):

1. Taps/swipes on the side of Glass

2. Audio/Voice

3. Image/Picture

4. Video

5. Accelerometer
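
Glass runs Android, so my best guess is that apps will read these inputs through the standard Android APIs. Here's a minimal sketch of what reading the touchpad (1) and the accelerometer (5) might look like under that assumption; since no Glass SDK is public yet, how touchpad events actually reach an app is a guess, and only the Android sensor and gesture classes shown are known quantities.

```java
// Hypothetical Glass activity. Assumes Glass exposes the standard Android
// sensor and touch-event APIs; the Glass-specific routing is a guess.
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.view.GestureDetector;
import android.view.MotionEvent;

public class GlassInputDemo extends Activity implements SensorEventListener {

    private SensorManager sensorManager;
    private GestureDetector gestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Input method 5: accelerometer, via the normal Android sensor stack.
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_NORMAL);

        // Input method 1: treat the side touchpad as a plain touch surface and
        // distinguish taps from horizontal swipes.
        gestureDetector = new GestureDetector(this, new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onSingleTapUp(MotionEvent e) {
                // Tap on the side of Glass: select, confirm, etc.
                return true;
            }

            @Override
            public boolean onFling(MotionEvent e1, MotionEvent e2, float vx, float vy) {
                // Fling along the touchpad: swipe forward or backward through content.
                return true;
            }
        });
    }

    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        // Route touchpad motion events into the gesture detector (assumption:
        // the touchpad shows up as a generic motion source).
        return gestureDetector.onTouchEvent(event);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0], y = event.values[1], z = event.values[2];
        // Only coarse head-motion input is realistic here, e.g. a quick nod or tilt.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not used */ }
}
```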

Unfortunately, none of these input mechanisms is particularly useful for detailed, powerful user interaction. Voice does provide some granularity, but the more complicated the voice command, the more natural language processing (NLP) is required, and NLP technologies are still in their infancy.

Google has said that there will be iOS and Android SDKs to go along with Glass. Based on my conversations with Google employees yesterday at SXSW Interactive, I don't think these SDKs will be ready to go on day 1, so people won't be able to use their smartphones as a remote control for Glass at launch. Given that Glass already accepts taps and swipes on the side panel, it would be nice to accept those same inputs via smartphone as a complementary UI mechanism.
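
To illustrate what that remote-control idea could look like, here's a hypothetical phone-side sketch that mirrors the touchpad vocabulary Glass already understands (tap, swipe forward, swipe back) over a plain Bluetooth serial connection. Everything Glass-specific here is an assumption: the UUID is just the standard serial port profile placeholder, and the Glass-side listener that would interpret these messages doesn't exist yet.

```java
// Hypothetical phone-side "remote touchpad" for Glass. The Bluetooth calls are
// standard Android APIs; the idea that Glass would accept an RFCOMM connection
// like this is purely an assumption.
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;
import java.io.IOException;
import java.io.OutputStream;
import java.util.UUID;

public class GlassRemote {

    // Standard Serial Port Profile UUID, used here only as a placeholder;
    // whatever Glass would actually listen on is unknown.
    private static final UUID SERVICE_UUID =
            UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

    private BluetoothSocket socket;
    private OutputStream out;

    public void connect(String glassMacAddress) throws IOException {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        BluetoothDevice glass = adapter.getRemoteDevice(glassMacAddress);
        socket = glass.createRfcommSocketToServiceRecord(SERVICE_UUID);
        socket.connect();
        out = socket.getOutputStream();
    }

    // Mirror the same inputs the side touchpad already accepts: taps and swipes.
    public void sendTap() throws IOException {
        out.write("TAP\n".getBytes());
    }

    public void sendSwipe(boolean forward) throws IOException {
        out.write((forward ? "SWIPE_FORWARD\n" : "SWIPE_BACK\n").getBytes());
    }
}
```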

Perhaps the most obvious form of granular control with Glass is pointing with a finger. This would require the camera to be on, significant processing power, and sophisticated video/image-recognition algorithms. I sincerely doubt Google will have these APIs ready on day 1, if ever. Perhaps third-party developers will develop such algorithms and make them available to other developers via APIs.
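
To give a sense of the per-frame processing involved, here's a rough fingertip-finding sketch using OpenCV (3.x or later), not any Glass API: segment skin-colored pixels, take the largest contour as the hand, and treat its topmost point as the fingertip. Real finger tracking would need far more than this, which is exactly why I doubt it ships on day 1.

```java
// Naive fingertip detection on a single camera frame with OpenCV. This is an
// illustration of the kind of processing required, not a production algorithm.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class FingerPointer {

    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static Point findFingertip(Mat frameBgr) {
        // Very crude skin segmentation in HSV space.
        Mat hsv = new Mat();
        Imgproc.cvtColor(frameBgr, hsv, Imgproc.COLOR_BGR2HSV);
        Mat skinMask = new Mat();
        Core.inRange(hsv, new Scalar(0, 30, 60), new Scalar(25, 180, 255), skinMask);

        // Assume the largest skin-colored contour is the hand.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(skinMask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        MatOfPoint hand = null;
        double maxArea = 0;
        for (MatOfPoint c : contours) {
            double area = Imgproc.contourArea(c);
            if (area > maxArea) { maxArea = area; hand = c; }
        }
        if (hand == null) return null;

        // Treat the topmost point of the hand contour as the fingertip.
        Point fingertip = null;
        for (Point p : hand.toArray()) {
            if (fingertip == null || p.y < fingertip.y) fingertip = p;
        }
        return fingertip;
    }

    public static void main(String[] args) {
        Mat frame = Imgcodecs.imread(args[0]); // a single camera frame from disk
        Point tip = findFingertip(frame);
        System.out.println(tip == null ? "no hand found" : "fingertip at " + tip);
    }
}
```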

I think Google knows the UI opportunities of Glass are limited. They also see a proliferation of new UI mechanisms, with the Leap Motion Controller and the MYO Armband as examples. I find the MYO Armband + Glass to be a phenomenally compelling combination. Unlike the Leap Motion Controller, the MYO Armband maintains one of the key characteristics of Glass: hands-free operation. You could start or stop the camera by simply rubbing your fingers together in the right way. You could snap a picture anytime you clap your hands or snap your fingers. You could initiate a live stream without even taking your hand out of your pocket. You could manipulate the camera imagery by waving your hands and fingers in the air.