“Greetings from Austin”

I am a German designer and Texan by choice.

On this site, I occasionally publish blog posts, which you can find below and in the menu on the right. Learn more about my work as a designer and take a look at drone imaging if interested.

When is design worth doing? How do I make it worth doing? I find myself looking for a way to express my perspective on the value of design for building products.

When building a product, design exists at the intersection of its contexts. Those contexts provide opportunities and constraints. Effective design stays in close contact with those contexts and integrates their inputs into a unified expression. By resolving the tensions and conflicts among that variety of inputs, the design expression becomes “more than the sum of its parts.” That is the value of design.

In the real world, the contextual grounding is often off: one context or another is usually over- or under-represented. Some projects chase a product vision that is barely verified with users. Others create beautiful, well-researched features, loved by users, that don’t provide business value. Often the product strategy and the users are well served, but nobody checked with the development team until it was too late.

Examining the quality of contextual grounding of a design project provides a way to assess the health of the project.

As a potential (or actual) project contributor, I can look at the grounding to judge: “is this worth doing?”

As a design leader, my imperative becomes to get contextual grounding right, by having the right people on board to represent the contexts, and by devoting resources and time to capture input and feedback from them.

Design can succeed without contextual grounding when constraints are few and resources are plentiful: when it’s easy. But that is rarely the case.

I am now a part 107 certified commercial drone pilot.

Once, I built an Arducopter, an early open source drone. The Arducopter was an awesome machine, and, at the time, a flying bag of software bugs. In the years since, I have flown a range of different drones, built some fixed-wing mapping drones, participated in FPV racing, and made aerial videos of deserts and mountains. I did some agricultural drone mapping, tracking plant health. Along the way, I founded Graf Systems, a drone-related software startup, but the time wasn’t right.

Making use of COVID downtime, I recently got an FAA Part 107 Drone Certification, which makes me a commercial drone pilot. Hopefully this will allow me to help others benefit from aerial imaging. For details on mapping and other imaging options, go to the Drone Imaging Page.

Drone Mapping

Besides discovering landscapes from a bird’s perspective, the use cases for aerial imaging I find most compelling are related to mapping: flying a drone to collect a lot of images of a piece of land, and then generating 2D and 3D data products that provide views and insights that are otherwise elusive.

Orthomap of a 3-acre property

We are ground-dwellers. Our perception of space is horizontal, and it’s based on seeing vertical markers: walls, trees, objects. By looking around, we get a good intuitive understanding of small spaces: a room, a back yard, a parking lot. Spatial relationships on larger scales are hard to understand: when walking through a park, driving on familiar streets, or living on a property a few acres in size, we often have a functional but actually quite vague understanding of the spatial relationships in our surroundings. When we look out of an airplane window and see our city from above on final approach, familiar places look alien and orientation is hard.

Making a drone map provides a new view of an area: it fills in the what-is-where-and-how-big blanks about the area with rich visuals. We catch up with the bird’s understanding, overcoming the usual looking-at-the-walls distortion. This comes in handy when planning to do things in the area, when monitoring large changes in the area, or when looking for something.

Besides delivering something useful, I find that drone mapping is also a bit magical: making structure from pictures. Algorithms pore over pixels, by the tens of millions, and decide where they are in space. I once hired an expert in Ukraine to explain to me how SLAM (Simultaneous Localization and Mapping) algorithms work. These algorithms are at the core of drone mapping and 3D scanning.

The video above illustrates the mapping process. I created it in one of the mapping software tools (by Pix4D) while making an orthomap of a rural 3-acre property (see the picture above.)

Drone imaging lifts up our terrestrial perspective,
providing unique value for many applications.

The aphorism “…Dancing about architecture” comes to mind as I’m trying to write about the role of space in user experiences.

Augmented reality has a long way to go, but the basics work well. I have used the Magic Leap ML1. Its integration of content with the environment is compelling. Placing content on the floor, on a table, or on a wall is believable. Occlusion works well. Digital content and the real world appear fused. To get a closer look at a piece of content, one just walks right up to it to literally “take a closer look” (within the limits afforded by a clipping plane close to the viewer.) So far, so good. Physical-world conventions integrate with digital content. I have seen many first-time users have immediate positive experiences.

Mechanical Stage Design by Joost Schmidt, 1925. 

The Bauhaus pioneered new forms, experiences, and design as a business.

(Public Domain Image, Source: Wikipedia)

Old Thoughts: What Could Space do for UX?

Switching gears… Consider virtual reality (VR.) It’s been around longer than AR. VR can be thought of as “AR’s older sibling who already has a driver’s license.”

In the past when I thought about virtual reality, I always considered there to be an opportunity to actually use space to present data, and to optimize the experience by using space in ways that transcend human “situation” in space and human locomotion. One could zoom by powers of ten, to massively change the scale of content from everything-at-once to “a detail stretching to the horizon.” Grids of objects could naturally live in spatial arrangements, instead of in tables on a plane. Size, position offset, and material qualities of objects could come to signify common distinctions. Use of space could help with navigating vast amounts of data, but it could also help in simple situations like editing or tracing code, or performing common exploration, discovery and drill-down flows.

I think there is some value to be found, even for simple use cases like wayfinding, not just for data wrangling. When we deal with digital content, we often handle content on a “more than you can imagine” scale, and the content is often arranged in some sort of hierarchy. So there’s potential for doing that in space, harnessing humans’ evolution-powered spatial perception capabilities. Why be confined by the ancient computing UI metaphor, the window?

The window as a UI metaphor was made for expensive, small computer screens in the 20th century. Heavy tubes of glass could show us a little rectangle of low-res content at a time. Resolution has gotten better and screens are now large, light and bright, but we still concern ourselves with tiny fields of view, go page by page, and scroll for miles a day.

The Old Sneaks Into the New

In augmented reality, when we go beyond demos, a lot of experiences are going to handle the same kinds of content that we already encounter on a regular computer screen or phone screen. In AR, these screens replicate: we place a web browser on the wall, a document viewer on the table, a video screen on the other wall.

This works well in the ways outlined above: instant familiarity, great immersion. The approach minimizes the experience delta from computers and phones to AR, making all the new AR users comfortable. Almost every AR user is a new user.

But “screens in AR” are also worse than using a regular old screen. A real screen beats any AR device in terms of resolution, while being physically easier on the eyes and a lot more convenient too. We’ll need better conventions that are native to AR, to avoid limiting the sophistication and long-term viability of AR experiences.

So Let’s Make Better Use of Space

The convenience gap will narrow as we go from AR headsets to AR glasses, and with each generation, AR tech overall will become more acceptable. As this development unfolds, we have an opportunity to provide presentation and interaction paradigms that make use of space itself.

New AR experience paradigms will unlock potential for new usability, delight and power that leaves “screens in AR” in the dust. More importantly, using space right could provide consumption and manipulation of content that is a superior alternative to using today’s computers and mobile devices, making AR competitive in the first place.

Getting there takes small steps. Today’s integrated AR experience of content-in-a-room is great for orientation, launching experiences, and simple consumption. No need to replace it with something harder or more abstract. With space at our disposal, we can use three dimensions, all around us, while preserving familiarity with existing digital experiences.

2D user interfaces use 2.5D affordances and hierarchical mental models. People understand and navigate abstract hierarchies, layers, and levels all day long in computer and mobile apps, by necessity. We can represent those structures with better, or less, abstraction by arranging and shaping content in space. Experimentation will tell what people’s taste for new spatial metaphors and mechanics is, and what their stomach for such innovation is.

Break Conventions to Find New Conventions

Breaking normal spatial conventions should be on the table too: changing the size of things, morphing things, giving up occlusion, creating virtual space that doesn’t actually integrate with the physical surroundings anymore …some of this could turn out to be helpful. I suspect that something like an “app-specific handling of space” could be used by people just fine, as long as conventions are established and followed, and as long as the transition moments and boundaries to other “regular” experiences are easily recognized. (…Enough dancing about architecture.)

Popular Culture Leads the Way

Movies and science fiction literature give us plenty of examples. Of course, such examples are not constrained by pesky hardware and software realities, but they hint at possibilities. Fiction has a track record of guiding actual progress. Neal Stephenson, one of the relevant authors in this context, recently led a team at Magic Leap.

Delivery Perspective

The AR companies, working with head gear hardware, with glasses, and with phone-based AR are building platforms: the infrastructure to run, create and deliver experiences. Most of the actual solutions, fulfilling use cases and providing business value, will be built by others: small and large software companies, the same people who build non-AR software today.

Driving Experience Innovation

Some external developers will stand out by creating ground-breaking new experiences that will later be absorbed into the platforms, pushing AR platforms’ conventions to evolve.

The majority of external developers will focus on optimizing the domain-specific capabilities of their products though, as they do on any platform. The experience conventions in AR will not be their focus of innovation. The UX will just have to work, based on what the platform offers.

It is the stewards of the platform who have to drive conventions forward. There is a balancing act between immediate familiarity to accommodate new users (every user) and finding new, expressive, productive use of space in the experience to deliver differentiating value that is impossible on other types of platforms. I think that a combination of simple, “literal” use of space and a more flexible use of space is part of the opportunity. The literal use of space provides the grounding, and the flexible approach unlocks the power.

While at Tethr, I encountered The Effortless Experience, an approach to customer relationships, developed by Matt Dixon and his colleagues. Its core thesis is “customers whose expectations [were] exceeded, are actually no more loyal than those whose expectations [were] simply met.”

To a designer, this can sound a bit bleak. After all, as designers, we want to make everyone’s experience as great as possible. But an effort-centric approach can be productive for design work, and for demonstrating design value to non-designers.

The Effortless Experience focuses on consumer experiences – people who spent money on goods or services and who are now interacting with the business or service provider, often through a call center, to make the product or service work right. The theory applies to the person with the broken vacuum cleaner looking for help, the person with the wrong wireless plan looking for help. These people either get the help they need without too much effort, leaving happy, or not. Since the devil is in the details of the personal interactions that shape the outcome of these situations, it is hard to track and optimize these outcomes from a business perspective.

As designers, we shape the user experience, which encompasses moments where interactions like the ones above happen, and their digital equivalents. We compose flows of screens, where we exercise close control of the experience. In service design, we shape customer journeys that include person-to-person interactions, taking a much wider perspective with less direct impact on the outcome.

User experience sits at the center of the relationship between the person using the product / service and the entity providing the product / service.

There are many types of designed experiences besides customer support. Thinking of my own experience, users of software development tools come to mind, as well as users of cloud infrastructure conducting business tasks, such as gleaning insights from data. In these instances, the underlying customer relationship is less transactional and more of a durable bond, because this person is relying on the service, its features and details, to create something, to make a living. Yet, when this person encounters a problem, the problem needs solving. The durable bond then momentarily becomes transactional. How easy or hard is it to get help, to solve the problem? The range of expectations around easy vs. hard varies by domain and by relationship type, as do the means to provide the help.

Meeting the person’s expectation, so they can get through their problem, is key. According to Effort Theory, meeting expectations is all that is needed to keep the relationship intact. No need to impress them, wow them, or entertain them, just help them get it done.

When designing the transactional “Change your wireless plan” type experience, Effort Theory provides straightforward guidance (…which doesn’t mean it’s an easy design task). When designing a relationship-based experience, the relevance of Effort Theory is less obvious. There are still “help me through this” transactional moments to optimize. Some of these moments are in plain sight (sign in, OOBE), but many are hidden, not in the core flows, not occurring with the content envisioned during design. People forge their own paths through a feature set and use less-than-ideal data to make things work.

From a design perspective, we address this challenge in two ways. On the known unknowns path we try to foresee “help me through this” moments and we identify them through qualitative research (talking to users instead of sending them surveys, for example.) We then solve these moments, one by one.

We can also attempt an unknown unknowns approach, admitting our inability to predict everything. In this approach, we provide robust feature sets and flows that strive to always provide more than a single way to achieve a result. Instead of funneling a user into a moment of crisis, we provide them with solvable alternatives.

Taking a step back, the original Effort Theory aims to create relationships based on transactions. It is the marketer’s hope to have the customer come back to buy their next vacuum because they feel they have a “relationship” to the vacuum company. Not that there is anything wrong with that.

In many situations, the people we design for and their service providers are already in the middle of a relationship. “You provide technology to me that helps me generate income.” A little hardship may be tolerated in this context as long as the core promise of the relationship holds. On the other hand, each instance of hardship will likely be perceived as a threat, and not just as a mere inconvenience. The livelihood is at stake, not just the vacuum cleaner.

In practice, Effort Theory thinking can guide design towards avoiding, detecting and eliminating possible damaging moments of hardship. A case-by-case approach can be taken, but is best combined with a systemic approach.

Many organizations, unaware of Effort Theory, design this way anyway, with structural or architectural perspectives.

Effort-aware structural design is different from other takes on design. Sometimes design is focused on creating moments of delight, on harvesting low-hanging fruit, on the theoretical perspective of a persona, on some specific type of UI expression, or on stakeholders’ pet features. Many approaches are valid in the right circumstances. Adding an effort-aware perspective can raise the quality of the result and thus the design return on investment.

Targeting Effort with design work can also help non-designers understand the opportunity design affords, since the customer relationship is on everyone’s mind, but design is not.

Looking through archives, I found this screensaver I coded in 1999, using Macromedia Director. 3D modeling and rendering was done by Kevin McIntyre, design by Karl Kim and creative direction by Mark Rolston.

While we didn’t know it then, at the end of the 1990s downtown Austin was slowly ramping up its transformation into what it is today. Eventually, tall buildings began sprouting everywhere.

As a graduate student in the mid-1990s, I wrote Information Bodies, a concept around bodies in virtual reality.

VR, for the most part, was text-based then (as rich as the user’s imagination), but some rare high-end systems could do graphical, immersive VR. William Gibson had ignited our imagination about this when he invented the term “cyberspace.” Social media did not exist.

Information Bodies looks at what a body in a graphical VR could be: an assemblage of tools and functions that both represent the person using the body, and that provide the person with ways to work with data. The concept considers the space of VR as a workbench for data. People in the space are shaping and interpreting the data. People carry data (media) around with them that represents who they are.

Re-reading it today, I find Information Bodies offers a refreshingly dry perspective (even if I do say so myself.) Today’s corporate vision of VR and XR, in contrast, is many things to many people, with a wide range of use cases. Running the wild west of the mobile web on hardware that will track what you are looking at is a privacy concern. At the same time, there is incredible potential in computational goodness integrated with the real world.

I recently found the Information Bodies’ webpages in an archive and put them back online. The concept is showing its age and it stops short of actually offering any designs or concrete features. But, naive as it is, the ideal it expresses is still close to my heart.

The concept is based on an implied assumption: that working with data spatially is superior to working with data in “flatland” on a screen. Is that true?

In summer 2019, I joined a small developer tools design team at Argo. Our team operated in the context of a large design team working with the client Magic Leap.

Magic Leap shipped the ML1 mixed reality headset in 2019 and announced the ML2 for 2021. The company is building out a platform that is aspiring to be an ecosystem.

I have been contributing to a project called The Lab, a meta tool that brings together a diverse set of existing developer tools in a modular fashion, providing workflow and installation management, along with device management and simulation features. The Lab makes the development and testing workflow transparent. The product shipped in Q4 2019, and is evolving as we speak.

Our dev tools design team at Argo worked in close coordination with the development team at the client, who managed to stand up and ship the Lab quickly, carefully prioritizing features along the way. Since the initial release, we have been working on future designs, conducting user research to align with feedback and future needs.

Much of my attention has been directed to Zero Iteration, which is an API simulator that is part of the Lab, providing quick turn-around for running a development project on a device, or in a simulator. I have also conducted user research and contributed to product strategy.

Starting programming at age 12 in BASIC, followed by some Pascal and Forth, I arrived at the C programming language a few years later. I had saved my Deutsche Mark allowance to mail-order the Mark Williams C compiler from America. By the time I tackled C, I had some usable programming experience, and I had found my way around a handful of operating systems.

But C was a different beast. This was intimidating. There was another guy at my high school, two grades above me, who was a C programmer then, and my teenage self considered him a hardcore engineer. (Most other teenagers there did normal teenager things.)

The C compiler came with printed documentation. I bought a few books, and dug into C programming, to get low level video buffer access to render fractals at last.

I did not become a professional software engineer, but I eventually got the fractals working.

Learning programming from scratch is a matter of understanding basic concepts and successfully using them before moving on to the next concept. The concepts get more complex, but one’s capability to understand them also grows, and the results get more interesting.

Besides the actual programming, there is also the “ceremony” around programming – using the tools, understanding the workflow, understanding and tapping into a variety of systems that multiply a programmer’s power. Today’s ceremony includes using IDEs, all sorts of APIs and things like source control. In the smaller world where I was burrowing into Mark Williams C, there were Unix shell commands, makefiles, the debugger, Emacs, the target OS calls to contend with.

C, as a language, was only incrementally harder than the other more high-level languages I had encountered before. The ceremony of C development was a lot harder – getting the tools tamed to actually start and compile a project took me a while. I felt like I was trying to find my way through a maze of invisible walls: something didn’t work, and I didn’t even know the word to describe it.

Eventually, one overcomes these problems and one moves on to greater challenges, leaving the previous ones behind.

As a designer today, I am working with in-house software developers who are building developer tools for a new technology platform. These tools are meant to be used by other developers, anyone who wants to, to build products for the new platform. I am one of several designers on the team, contributing tool UX and structure.

As part of a research project, I formally interviewed a number of the in-house tool developers, passionate engineers with sophisticated skills thriving at their mission.

I also found that these sophisticated engineers tended to look at the world through the lens of their own skill level. While solving hard problems on a daily basis, their command of their own development tools and overall ceremony was mostly effortless, like a professional musician’s command of their musical instrument. These engineers had left their invisible walls behind a long time ago.

In contrast to the mastery I encountered in the interviews, from my own perspective as a self-taught programmer of lower proficiency, there are in fact invisible walls in the tools that we are designing and building together. Again, I sometimes struggle to even name them.

From my perspective as a designer, finding and removing invisible walls is a non-traditional challenge of assumptions around contextual knowledge. Someone having the right contextual knowledge (internal tool developer, or a sophisticated customer) can navigate the walls. To this select group, the walls are just small obstacles, and quite visible. This proficient user doesn’t un-learn their sophistication though – they can’t take the perspective of not knowing.

We want “external” developers as new customers and as new contributors on our platform. To them, the invisible walls are barriers.

These external developers learn about the platform and are interested enough to look at the tools. They then encounter their first invisible wall. They know the game: “There will be more walls – find your way through, and reap the rewards.” Such a developer decides to take on the challenge and invest the time, and they eventually become proficient and build cool things.

Or not: they walk away. Everything else wants their attention too.

To me, tackling the invisible walls is the unique challenge of tool design on top of the usual project dance of requirements, constraints, politics, structure, mess and beauty.