http://martianwatches.com

My Martian arrived yesterday. It’s beautiful. It connects to the phone via bluetooth, takes voice commands, displays incoming notifications, and provides microphone and speaker functionality for calls. The phone can stay in the pocket, in theory.

Martian Backside

I say “Email Wife, Good Evening”. It hears “Dial 658442”. I quickly grab the phone and cancel the random call it just initiated. Better not keep the phone in the pocket then. Sometimes a voice command succeeds, but the success rate is significantly lower with the watch than when using voice input on the phone directly.

Notifications for incoming emails are meant to appear on the watch, driven by an app. It doesn’t work, and it isn’t working for other users either. I suspect the source of this problem is Android device diversity. Or, given that the Android device diversity mess is a well-known issue, one could say it’s an issue with the software quality assurance choices made on this project. Is there slack to be cut here because this is “just a Kickstarter project” ?

The app problems can probably be fixed in time, but the microphone limitation, if it is a hardware limitation, probably cannot. The mic is claimed to be noise-canceling. Maybe my accent needs to shift from mid-western-European to Midwestern, although the phone alone deals with it just fine.

It’s entertaining. And when it does work, it’s magical. I am just glad that my own paycheck provider’s logo is not on this particular experience.

I first learned about voice input when I visited the University of Trier during my last year of high school, many, many years ago. The tiny, esoteric “Linguistic Data Processing” program there had been my backup plan in case of rejection by the design school I had applied to. I got accepted into design school and didn’t look back, but the business of turning a Fourier transform table into a sequence of meaningful letters remains impressive to me.
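
That first stage, producing the table itself, is easy to sketch today. Here is a minimal example with assumed parameters (16 kHz audio, 25 ms frames) and a synthetic tone standing in for speech; everything downstream of this table is where the hard recognition work lives:

```python
# Slice a microphone signal into short frames and turn each into a spectrum --
# the "Fourier transform table" an acoustic model then maps to speech sounds.
# Parameters are illustrative, not from any particular recognizer.
import numpy as np

sample_rate = 16_000                      # 16 kHz, a common rate for speech
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 440 * t)      # stand-in for a recorded utterance

frame_len, hop = 400, 160                 # 25 ms frames, 10 ms hop
frames = [signal[i:i + frame_len]
          for i in range(0, len(signal) - frame_len, hop)]

# One spectrum per frame: magnitude of the real FFT, windowed to reduce leakage.
spectra = np.array([np.abs(np.fft.rfft(f * np.hanning(frame_len)))
                    for f in frames])
print(spectra.shape)                      # (num_frames, frame_len // 2 + 1)
```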

Voice input is going to appear in a lot of products, since Siri has set expectations. The trouble, of course, will be that the quality of Siri will not be matched by many me-toos, since it depends on natural language processing and on a knowledge engine, Wolfram Alpha, not just on “simple” speech recognition that can be had off the shelf… Progress will be made, and incredibly quickly too, but slower than expected by many a product manager.

A question that does not depend on progress, though: when does one actually want to use voice ?

Voice input has its proven place in hands-full situations, and it can out-convenience handset-keyboard pecking. But, given a choice, people often prefer the thinner communication channel to the richer one: a text instead of a call, an instant message instead of a mail, a tweet instead of a status update. Why go back to rich voice, full of intonation, emotion, intent ? There also is the tap as the established “killer app” for device input – unbeatable in simplicity.

The answer / opportunity may lie in the disappearing computer: as the built environment becomes smarter, there just won’t be anything to tap, type, or click on. Just say what you want.

 

Watching the Ubuntu Phone Video

Quick look: The UI looks clean, with a touch of typography and panning from you-know-where, a lot of Ubuntu “everything everywhere” integration, a decent app model, and fresh takes on basic interactions. No stupid tiles, no ugly chrome, good flatness, no skeuomorphism makes for a happy designer, or rather design-judgment-dealer-outer.

Based on past Ubuntu incarnations on the PC, one can’t help but wonder how far under the surface one has to dig until this all falls apart. There is some room for optimism, since the mobile development community generally brings more design sense to the table than the traditional desktop open source development community does. So, will economics compel established mobile developers to branch out to this platform ? (Maybe because they aren’t making any profits on the established platforms either… not a great motivator, but a real one.) Or is this platform the inroad-to-mobile for open source ugliness ?

 

The Ubuntu phone looks compelling in that it takes the shedding of desktop tropes further than the alternatives do. It’s good to see another incarnation of swipe-ology as a touch/mobile-native interaction paradigm. There are many new details that offer places for new devils to hide, but at least some old ones get banished.

If I look at non-digital-natives struggling with the very basics of their iPhones and Androids, though, I wonder if this step forward widens the digital moat, via departure from conventions and via upping the complexity of the mental model required. Or will the better, natively-conceived experience ultimately contribute to bridging the divide instead, lowering the barrier for users with minds unspoiled by desktop ballast ? This comes down to the UI design zero-sum game: more appropriate representations – better design – enable more understanding of complexity, but at the same time, the complexity of the systems represented rises.

The battleground for user acceptance is in emerging markets, not in the US / Europe, so we’ll watch from a distance. Users’ actual preferences are of course only a small part in the acceptance of an alternative OS. The phone is still a utilitarian, shrink-wrapped black box for most users, just as carriersaurs want it. In this situation, the user experience is more of a potential motivator for decision makers to give the platform a shot with users. The platform’s provider-oriented economics around installation cost and app store control are probably more important factors, but the experience is still an enormous success motivator – its intuitive, tangible qualities act in a counterintuitively intangible way.

 

This is the second version of this video, left unstabilized after stabilization put some nasty jitter into the first version.

The not exactly minimalist video is based on footage taken at sunset with the Hoverthings HT-FPV quadcopter. This copter’s size is just right for flying close to buildings. My larger, more powerful Arducopter can deliver the same quality, but wants a bit more space around it.

The flight is line of sight, not FPV. I added the impressionistic video filter in post, since it played well with the sunset colors, and the raw video was already a bit too dark to read well. There also is an audio filter on the copter motor sound, adding distortion that makes the copter sound larger. The original sound is much tinnier.
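
For the curious: a simple way to get that kind of fattening distortion, sketched here with illustrative parameters rather than the exact settings from the video, is soft clipping, which adds harmonics to a thin sound:

```python
# Soft-clipping ("waveshaping") distortion: drive the signal into a tanh curve
# so a small motor whine picks up harmonics and sounds beefier. Purely a
# sketch -- the 180 Hz tone stands in for the actual motor recording.
import numpy as np

def soft_clip(audio: np.ndarray, drive: float = 4.0) -> np.ndarray:
    """Boost the signal into a tanh curve, then normalize back to +/- 1."""
    shaped = np.tanh(drive * audio)
    return shaped / np.max(np.abs(shaped))

sample_rate = 44_100
t = np.arange(sample_rate) / sample_rate
motor = 0.3 * np.sin(2 * np.pi * 180 * t)   # stand-in for tinny motor whine
bigger = soft_clip(motor, drive=6.0)         # richer in harmonics, "larger"
```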

Here is a design-related post for a change. Windows 8, with its “Modern” (“Metro”) user interface, is being hotly discussed. Fault is found with it for causing pain and frustration to users who have to relearn basic interactions. Such fault-finding is dismissed by others as “old thinking”. At the same time, a relevant 2011 piece by Mike Kruzeniski, who had a hand in the creation of the Windows 8 UX, is being circulated.

Here are some thoughts on the topic.

These thoughts are incomplete for sure, as they are short and don’t address:

  • the non-Modern/Metro gripes voiced against Win8,
  • the roots of the new canonical UI language elements found in Win8,
  • topics around user expectation management and context setting that must be part of any UI discussion today.

…and so on, but here goes.

 

The current Windows 8 debate, in the design community and its surroundings, highlights the frustration that users experience when forced to use a new UI. This frustration leads some to judge that Windows 8 is bad. This is justifiable from a perspective of pain minimization. However, there is also the perspective that abandoning the old desktop conventions that still permeate the other available platforms is valuable.

Is it valuable enough to justify the price of user frustration ?

Similar frustrations happen on other platforms too – they are not exclusive to Windows 8, although they get more attention here, since Microsoft has always stood for backwards compatibility and continuity. Examples on other platforms are the problems that casual users experience around understanding cloud syncing, backup status, and app store purchase confirmation (portrayed as simple, while complex in nature), the various zoom toys in OSX, the incarnations of never-discovered menu buttons in Android, and the sea of cute, illegible icons in iOS that blend core functionality and peripheral one-trick ponies into a big, sparkly-but-murky soup. In other words, the leading, paradigm-setting OS user experiences, pre-Windows 8, already chafe at many ends.

None of this friction happens to expert users – the designer living her life in OSX knows her way around in her sleep, as does the architect living his life in Windows 8 since the early release previews.

But non-expert users struggle: our parents, and non-“computer people” of any age, with various levels of digital literacy, anywhere… including the majority of non-expert users coming online in the developing world as we speak. The traditional usability battle is for these users’ hearts and minds: provide an environment that allows a successful experience, even when understanding of the underlying systems and concepts is partial or absent. A basic usability solution approach is familiarity: present UIs that use familiar mechanisms and conventions, in familiar layouts. The familiarity here can draw upon a digital canon of expressions, such as a raised appearance for clickable elements, and a cut-out appearance for editable fields. Skeuomorphism is another source of warm, welcoming familiarity, when employed for this purpose.

Familiarity, employed through the ages, brings us to an absurd situation: tablets, mobile devices, and really any kind of modern computation form factor relying on ancient, “paper form on a wooden desktop in a 1950 office”-based conventions. Those conventions do a great deal of proven good, providing the mental crutches that make the difference between task completion and task failure for many non-expert users.

But they also fall short. Today’s experiences rely on constructs like network connections and account privileges pointing into forever evolving collections of stuff, pushed around through many similar but different communication and access methods. In other words, on abstract, complex concepts. The role of the UI has thus silently shifted, to providing anchors – identifiable “things” that users can shape their thoughts around – for abstractions of non-reducible complexity. The old approach fails: “it’s simple: it’s just yet another mix of familiar elements” – because it’s not. Hiding complexity, if it is non-reducible, is not a solution.

The tension is essentially between old words, in the designers’ toolbox, that can’t explain new things. Yet, design still promises to make things easy or even delightful. Design’s audience, the user (and also the paying client), has a matching expectation: if you make me use this, it better be delightful, or at least easy.

But the tremendous pool of possibilities and power that is unlocked through user experiences today requires a new understanding: as designers, we need to look for ways to provide the needed anchors that make invisible stuff visible. Users will welcome a UI evolution that provides them with meaningful insights into powerful functionality. Users will feel relieved to not have to look between the UI elements to figure out how things actually work. To get back to Windows 8: the break with familiarity is still frustrating, but this frustration will be overcome once users inevitably figure out how the system works.

Could more help be provided, to make this an easier transition ? Could some familiar elements have been preserved from the existing canon ? Possibly yes and yes. On the other side of the transition is an experience that offers design opportunities for new solutions that are not tied down by no-longer-relevant, no-longer-accurate ancient metaphors. It is up to the design community to make use of these opportunities.

Here, the 2011 Mike Kruzeniski piece comes in. The content-centricity that he emphasizes is a common trait of user experiences today, and I agree that print design’s craft, with grids, whitespace and typography, has a strong place in replacing the “old UI furniture” canon with a more appropriate contemporary canon.

I would argue, though, that content centricity, where print-use shines most, is the easy part to solve. Finding a new normal, where complex, invisible interdependencies and connections are readily understood, is the harder challenge. Here, the print design toolbox again should get more play than it traditionally does, but it is also inadequate on its own. Just simplifying UIs, replacing chrome-heavy buttons with simple typography, replacing barriers and levels with whitespace and sizing, is beautiful, but may not help the struggling user to fulfill a task. The UI must still provide differentiation, and provide the proper anchors for the abstract. The way to approach this challenge, I think, is with the design toolbox in hand, including print design, but considering content and functionality as different problem domains that share space on the screen – so, instead of making everything look more content-y, or generally giving content more emphasis, we can find new ways to differentiate. Abandoning some familiar elements, at the well-understood price of this action, is part of that.

 

Honoria and I just got back from Houston, where we attended the 100 Year Starship Symposium. This event was hosted by Mae Jemison’s Dorothy Jemison Foundation. The foundation has received a DARPA grant to organize people who are interested in enabling the technology needed to create the possibility of building a starship – an interstellar vessel, possibly carrying humans – within a timeframe of a hundred years.

Walking into the symposium cold, with no applicable knowledge, and then sitting down to learn about concepts for FTL drives, the math, and the associated energy requirements made for an interesting few days. The FTL drive is important because traveling at fractions of lightspeed is just a bit slow. It works by shrinking spacetime in front of the craft and expanding it behind. That takes a lot of fuel. To avoid carrying all that fuel with you, generate it en route by heating the vacuum of space with lasers, creating particles from background noise. That takes big lasers, and so on.
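
That “shrink in front, expand behind” idea is the Alcubierre metric. For reference, here is its standard textbook form (reconstructed from memory, not from the symposium slides):

```latex
% The Alcubierre "warp bubble" metric. v_s(t) is the bubble's velocity and
% f(r_s) is a smooth shaping function that equals 1 inside the bubble and
% falls to 0 far away from it.
ds^2 = -c^2\,dt^2 + \bigl(dx - v_s\, f(r_s)\, dt\bigr)^2 + dy^2 + dz^2
```

The craft rides the flat spacetime inside the bubble and never locally exceeds lightspeed; the catch is that sustaining f(r_s) is known to require negative energy density in enormous quantities, hence the fuel problem and the big lasers.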

As a whole, interstellar travel is characterized by requiring solutions that appear to be theoretically (string-theoretically) possible, give or take a few miracles, but out of reach by a few orders of magnitude when compared to currently available technology and knowledge. But then again, our lives today are full of technologies that met this exact description fifty or a hundred years ago.

Besides technology and math, there also are a lot of questions of a social nature: how many people would make a good crew ? What should they take with them ? What will happen if somebody gets mad in the canteen when the craft is three lightyears away ? What will the children of the travelers think about Earth, or about planets in general, if they have never been close to one in their entire lives ?

Software ?

Myself, I wonder how the travelers will write software when they need some. And they will need software for everything.

Should the programming spacefarer still be exposed to the memory-leaking pointer mess that today’s compilers serve up, with their computer science concepts from the 1960s ? Or will functional programming and its conceptual children finally find an inroad into the software industry, with the starship usage environment as a catalyst ? This would be a great near-term economic engine for a long-term goal, improving software everywhere on earth well before the ship is built.

Will a good user experience still require weeks of mucking about by a designer / artist, followed by more weeks of mucking about by the rare developer who is actually interested in making the “good” parts of the solution happen, by tediously wiring voluminous, dull user interface code and state management into their life-support-system business logic ? Or will software finally fulfill the modularity promise that object-oriented programming has failed to deliver on ? To get there, we should envision UX-less program logic, self-describing its inputs, outputs, and state needs, to which smart UX components can attach themselves to provide a realtime solution that is optimized for the platform, the user’s proficiency, the user’s preferences, and the exact information that needs to be communicated. There is another short-term engine in this idea: it would free bored programmers from having to fiddle with UX, and allow device manufacturers to compete on the intelligence, beauty, and flexibility of their devices’ user experiences, instead of dumb-skinning the robot or the fruit as it is done today.
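
To make the idea concrete, here is a minimal sketch of what such self-describing logic could look like. Everything in it (Port, Module, render_text_ui) is a hypothetical illustration, not an existing framework:

```python
# UX-less program logic that self-describes its inputs and outputs, so a
# separate "smart" UX component can render it. All names are invented for
# this sketch.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Port:
    name: str
    kind: type        # the value type this port carries
    label: str        # human-readable label a UX component may use
    unit: str = ""    # optional unit hint for rendering

@dataclass
class Module:
    """Pure program logic plus a machine-readable description of itself."""
    inputs: list[Port]
    outputs: list[Port]
    run: Callable[..., dict]

# The "business logic": no UI code anywhere near it.
def co2_scrubber_margin(crew: int, scrubber_rate: float) -> dict:
    produced = crew * 1.0          # rough figure: ~1 kg CO2 per person per day
    return {"margin": scrubber_rate - produced}

scrubber = Module(
    inputs=[Port("crew", int, "Crew size"),
            Port("scrubber_rate", float, "Scrubber capacity", "kg/day")],
    outputs=[Port("margin", float, "CO2 margin", "kg/day")],
    run=co2_scrubber_margin,
)

# A dumb "smart UX component": renders any Module as a text form. A richer
# one could emit a touch UI, a voice dialog, or a dashboard widget instead.
def render_text_ui(module: Module, **values) -> None:
    for port in module.inputs:
        print(f"{port.label}: {values[port.name]} {port.unit}".rstrip())
    result = module.run(**values)
    for port in module.outputs:
        print(f"{port.label}: {result[port.name]:.1f} {port.unit}")

render_text_ui(scrubber, crew=6, scrubber_rate=7.5)
```

The point is the separation: the logic never imports a UI library, and any number of renderers, optimized for platform, proficiency, and preference, can bind to the same description.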

Or maybe the second generation of starship kids will just use their optical-nerve-integrated command line anyway – but I don’t think so (and if they do, more power to them). At the symposium, the old-school-singularity consciousness-uploading and bloodcell-replacing nanobots did not make a major appearance. The social discourse was just that: grounded social discourse.

A nice moment at the event was sitting at the dinner table with Jill Tarter, the director of SETI for over three decades, while a clip from Contact was shown on the video screen, showing Jodie Foster as a fictional Jill.

Also:

The physics and math presented in the panels, well beyond my reach, were impressive, but I have a sense that I know what I don’t know in that area. Truly alien was hearing the speakers on public policy.

All the NASA folks are Trekkies – no surprise there. Star Wars was not mentioned. The parsec is a unit of distance, not speed, after all.

Here is a piece by Clara Moskowitz on Space.com.

This Sunday at 5pm, after preparing all weekend long, we did the first set of FPV flights. Luckily, this was also the first weekend that could be described as offering fall weather in Austin, with pleasant temperatures. The flights were successful, despite some wind that was beyond the airframe’s comfort zone.

The airframe is an old EasyStar electric glider, which is not particularly performant, but very reliable, predictable, and forgiving. It is propelled by an old Zagi brushless power system, offering more than enough thrust, even with all the extra FPV payload.

The FPV equipment is some of the better stuff available, mostly bought gently used, relying on brand names that use an absence of subtlety to convey amazing technological offerings. The video frequency is 5.8 GHz, to work with the 2.4 GHz radio system. Choosing a video frequency is a bit of a religious decision.

Long-range FPV fliers and purists advise against 2.4 GHz radio usage, because the digital nature of the 2.4 GHz system offers little warning of signal loss – the link tends to drop abruptly rather than degrade gradually – but for our not-long-range-at-all-so-far flying, it’s just fine, especially with DSM2 and dual receivers.

The FPV system was using a Team Black Sheep camera and Core (power supply and on-screen display), components that are described as operating with low noise. Electrical noise is the enemy of range. The video transmitter is a proven ImmersionRC 600 mW unit, and it talks to a ground station with an ImmersionRC diversity receiver. The diversity receiver actually consists of two receivers, and it switches, between video frames, to whichever one offers the stronger video signal. The two receivers are hooked up to two different antennas: a low-gain circular-polarized one, matching the transmitter’s, and a higher-gain linear-polarized one.
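
In pseudocode terms, the switching amounts to something like this toy model (my illustration, not ImmersionRC’s actual algorithm):

```python
# Toy model of diversity switching: two receivers report signal strength,
# and the ground station hands the video feed to the stronger one between
# frames. Numbers are made up for illustration.
from dataclasses import dataclass

@dataclass
class Receiver:
    name: str
    rssi: float  # received signal strength, arbitrary units

def select_receiver(a: Receiver, b: Receiver) -> Receiver:
    """Pick the receiver with the stronger signal for the next frame."""
    return a if a.rssi >= b.rssi else b

# Simulated per-frame signal strengths: the circular antenna is steady,
# the linear one is better only while the plane stays inside its beam.
circular = [52, 50, 49, 47, 45]
linear   = [60, 58, 30, 10,  5]

for frame, (c, l) in enumerate(zip(circular, linear)):
    active = select_receiver(Receiver("circular", c), Receiver("linear", l))
    print(f"frame {frame}: using {active.name} antenna")
```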

Circular-polarized antennas offer less overall theoretical range than linear-polarized ones, but they reject multipath interference. Multipath interference occurs when the 5.8 GHz signal, with its crisp, bouncy short waves, gets reflected by buildings and other structures. The reflected signal, when picked up by the receiving antenna, messes up the video signal. Since a circular-polarized signal reverses direction (clockwise to counterclockwise) when reflected, it is mostly ignored by the circular-polarized receiver antenna.

Linear-polarized antennas are prone to multipathing. The linear antennas that I have available offer somewhat higher gain, though, meaning they can pick up a signal from further away… but only if the signal source is within the “beam” of the antenna, which is, for example, a sixty-degree cone. Outside it, no signal: higher gain does not mean that an antenna is “stronger”, it just means that it focuses its attention on a specific area. The smaller the area, the better the reception in that area.
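
To put rough numbers on that trade-off, here is a back-of-the-envelope calculation using the common beamwidth-to-gain approximation (a textbook rule of thumb, not datasheet values for these antennas):

```python
# Rough gain estimate from beamwidth: G ~ 41253 / (az_deg * el_deg), where
# 41253 is the number of square degrees in a full sphere. Illustrative only.
import math

def gain_dbi(az_beamwidth_deg: float, el_beamwidth_deg: float) -> float:
    linear_gain = 41253 / (az_beamwidth_deg * el_beamwidth_deg)
    return 10 * math.log10(linear_gain)

print(f"60 x 60 degree cone:  ~{gain_dbi(60, 60):.1f} dBi")    # ~10.6 dBi
print(f"120 x 120 degree cone: ~{gain_dbi(120, 120):.1f} dBi") # ~4.6 dBi
```

So a sixty-degree cone buys roughly 6 dB over an antenna covering four times the solid angle; the extra reach comes entirely out of the coverage.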

By having these two different receiver antenna characteristics available on a diversity setup, the system gets a lot of redundancy.

The video, once received by the ground station, is fed into a set of video goggles – Fatshark Dominator goggles. The goggles are mildly uncomfortable, but bearable. With glass optics, they offer a decent picture quality, compared to plastic optics on lower-end goggles.

Supporting the goggled pilot, a spotter keeps track of the real world, and provides guidance. I am lucky to have a wife who is actually interested in helping out with this.

When flying, the in-goggle live video from the plane is very immersive. I found myself tilting my head to try to correct the plane’s attitude when it got blown around by high winds. The in-plane perspective looks much higher up in the air than the on-ground perspective, which is cool but creates the risk of unintended ground contact. Seen from in-plane, the world also looks very different than it does from the ground. This is a trivial observation, but in practice it means that orientation is challenging even in very familiar surroundings. In other words, it’s great fun.
