My Martian arrived yesterday. It’s beautiful. It connects to the phone via bluetooth, takes voice commands, displays incoming notifications, and provides microphone and speaker functionality for calls. The phone can stay in the pocket, in theory.
I say “Email Wife, Good Evening”. It hears “Dial 658442”. I quickly grab the phone and cancel the random call it just initiated. Better not keep the phone in the pocket then. Sometimes a voice command succeeds, but the success rate is significantly lower with the watch than when using voice input on the phone directly.
Notifications for incoming emails are meant to appear on the watch, driven by an app. It doesn’t work, and other users report the same problem. I speculate that the cause has to do with Android device diversity. Or, given that the Android device-diversity mess is a well-known issue, one could say it’s an issue with the quality-assurance choices made on this project. Is there slack to be cut here because this is “just a Kickstarter project”?
The app problems can probably be fixed in time, but the microphone limitation (if it is indeed a hardware limitation) probably cannot. The mic is claimed to be noise-canceling. Maybe my accent needs to shift from mid-Western-European to Midwestern, although the phone alone handles it just fine.
It’s entertaining. And when it does work, it’s magical. I am just glad that my own paycheck provider’s logo is not on this particular experience.
I first learned about voice input when I visited the University of Trier during my last year of high school, many, many years ago. The tiny, esoteric “Linguistic Data Processing” program there had been my backup plan in case of getting rejected by the design school that I had applied to. I got accepted into design school and didn’t look back, but the business of turning a Fourier transform table into a sequence of meaningful letters remains impressive to me.
Voice input is going to be appearing in a lot of products, since Siri has set expectations. The trouble, of course, will be that the quality of Siri will not be matched by many me-toos, since it depends on natural language processing and on a knowledge engine, Wolfram, not just on “simple” speech recognition that can be had off the shelf. Progress will be made, and incredibly quickly too, but slower than expected by many a product manager.
A non-progress-dependent question, though, is: when does one actually want to use voice?
Voice input has its proven place in hands-full situations, and it can out-convenience handset-keyboard pecking. But, given a choice, people often prefer the thinner communication channel to the richer one: a text instead of a call, an instant message instead of an email, a tweet instead of a status update. Why go back to rich voice, full of intonation, emotion, intent? There is also the tap, the established “killer app” of device input, unbeatable in simplicity.
The answer, and the opportunity, may lie in the disappearing computer: as the built environment becomes smarter, there just won’t be anything to tap, type, or click on. Just say what you want.