Future is invisible

27 Jan 2016 by

Notes on O'Reilly Design Conference

This was the first year that O’Reilly organized a design conference. It was held in the best place possible: San Francisco, the hub of many tech giants and startups alike that have been disrupting every other industry. The conference was mainly targeted at people in UX design, including interaction designers, visual designers, design researchers, developers, program managers and so on. I got the opportunity to attend since I was in the US for project work and Dr. Reddy’s was kind enough to sponsor it, for which I am grateful.

 

Fort Mason Centre where the conference was held

The theme of the conference was ‘Design the Future’ and it delivered on this in many ways. There were talks and discussions about design as a practice in general - what kind of future we could build together for design and with design. Topics included women in design, design education for underserved communities such as African Americans and Hispanics, and designing for enterprise and other industries that are yet to benefit from design, e.g. healthcare, government and enterprise apps. A few topics were more specific to interaction design and user experience design.

The overarching takeaway for me as an interaction designer was that the future is ‘Zero UI’. When mobile phones became common and “smartphones” started coming onto the market, efforts were directed toward cramming every possible function and feature into the user’s pocket. Once we perfected multi-touch displays, and as displays became cheaper, we started seeing screens everywhere - even in places where it might seem counter-intuitive, such as the climate and stereo console of a car, where the user should be able to control these things without having to look at a screen. Now the movement seems to be towards embedding the ‘smartness’ into the things surrounding the user and decentralising the intelligence. That is why we hear terms like Internet of Things (IoT), ambient interfaces, gesture interfaces, natural language interfaces, augmented reality, virtual reality and so on. The effort is towards reducing the number of screens and making interfaces intuitive by shrinking the ‘interface’ - the middle layer between the user and the goal he/she wants to achieve - bringing it closer to direct manipulation. I will write about this in more detail below, referencing the talks and workshops that I attended.

Making Zero UI

A design framework for invisible interfaces

(Talk description)

 

Andy Goodman introduced us to this term ‘Zero UI’ and he talked about his team’s efforts in creating a framework for these invisible interfaces. He says,

"Figuring out the rules for designing invisible interfaces will require a leap of imagination, skill, and knowledge for designers, a jump almost as big as the leap from the 2D world of print design to the 3D world of web design. Now, instead of linear flows with defined sets of actions and outcomes, we must start to think in “4D,” as we orchestrate and choreograph scenarios in which the actor can do just about anything in any direction.”

There are quite a few challenges in designing for these invisible interfaces. I am noting down a few of them here:

  1. Clashing Inputs: How do you filter signal from noise?
  2. Social Priority: How do you decide which user to listen to? The classic example is: who has control over the TV remote, the home owner or the guest?
  3. Understanding intention: How do you understand the intention behind words or gestures that could mean different things in different contexts?
  4. Communicating error states: How do you let the user know that something has gone wrong? And, more importantly, why the error occurred and what can be done to resolve it?

These are very interesting and difficult challenges to solve. As he says, as designers we will need to broaden our horizons and collaborate even more with people who have varied skill sets.

Direct manipulation is broken

Why the IoT asks consumers to think like programmers and the UX challenges this creates

(Talk description)

In this talk Claire Rowland talked about how, in the past, everything was interfaced through direct manipulation, where users got immediate feedback for the actions they took on the system.

 

But the Internet of Things breaks that direct manipulation by making users interact with an abstract version of their physical surroundings. This happens in three ways, which are also the key benefits of the Internet of Things:

  1. Remote control: Displacement in space, where users can’t see the system’s response to the action they have taken.
  2. Automation: Displacement in time, where users need to anticipate their future needs and program for them.
  3. Flexible, multi-purpose hardware: Displacement in function, where users might use a device in a way its designers did not intend, so behaviour that was appropriate for the original use no longer fits what they are using it for.

The solution to these problems seems to be making things even smarter. But it is not as simple as that. As more devices get interconnected, there are more edge cases to take care of.

 

“... the cumulative complexity of a bunch of simple things - regardless of how delightful, simple and desirable they might be - will soon exceed the ability of humans to cope.”

— Bill Buxton

She then talked about how using these IoT systems is more like programming, which seemed very apparent to me, as users need to think in terms of “if this happens then do this, this and this, or else if that happens then do that”. This is too much work for consumers, who want the thing they bought to just work right out of the box.
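
To make that concrete, here is a minimal sketch (my own, not from the talk) of the kind of rule a consumer effectively has to program into a smart-home system. The device names and conditions are hypothetical; the point is the if-this-then-that mental model the user is asked to hold.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    name: str
    condition: Callable[[Dict[str, bool]], bool]  # looks at the current home state
    action: Callable[[], None]                    # what the system should do


def evening_and_away(state: Dict[str, bool]) -> bool:
    # The user has to anticipate this situation in advance: it is after sunset
    # AND nobody is home, so switch on a lamp to make the house look occupied.
    return state["after_sunset"] and not state["someone_home"]


rules: List[Rule] = [
    Rule("fake presence", evening_and_away, lambda: print("Turn on the living room lamp")),
]


def evaluate(rules: List[Rule], state: Dict[str, bool]) -> None:
    # Every additional device multiplies the states the user has to reason about.
    for rule in rules:
        if rule.condition(state):
            rule.action()


evaluate(rules, {"after_sunset": True, "someone_home": False})
```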

There are no direct answers to these problems. Claire shared a few examples of systems that have done some good work in minimizing them, but there is still a lot to be done. I was happy to walk out of the session with more questions in my mind than answers! Her delivery was humorous and engaging, right up my alley. You can have a look at the slide deck that she used here.

Your friendly robot companions

Design for messaging and chat

(Talk description)

We have seen a rise in services that use text/chat as their interface. This has many benefits for users: they don’t need to switch between apps to get things done, and they don’t need to learn a new interface every time they decide to use an app.

On the other hand, this creates many new challenges for designers. We don’t have control over the typeface, the color, the branding... basically no control over the interface! The only things we have control over are the content, how the service reacts to user inputs and what it can do. It is design by copywriters, by coders and by business analysts.

Variance in chat UIs

 

There is a different set of challenges to consider while designing chat bots, challenges that are more user-centered than UI-designer-centered.

  1. Discoverability: How does a user know what your app can do?
  2. Natural Language Processing: How can your app best understand free-form input?
  3. Verbosity vs Terseness: How much or how little should your app say?
  4. Personality: How much, where and when is personality appropriate?
  5. Input Validation: How do you provide feedback on a line of text?

There are no best practices or standards for these things yet, but here are a few things that we can keep in mind while designing chat bots.

Design for the following cases (see the sketch after this list):

  1. Initial prompt - How do you let the user know what you can do - progressively, over time?
  2. Incorrect response - How do you guide the user towards giving correct input?
  3. Quit - How can the user terminate the ongoing process at any moment?
  4. Help - How can the user access help in the middle of the conversation?
  5. Timeout - What happens when you don’t get a response? How long do you wait? Does the user need to start all over again at a later point, or can they pick up the conversation right in the middle (which is not natural)?
  6. Correct response - This is the ideal case.
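
Here is a toy sketch of how a chat bot might cover these cases. Nothing in it comes from the talk itself; the commands and wording are hypothetical, and handling timeouts would need a timer outside a function like this.

```python
def handle_message(text: str, known_commands: dict) -> str:
    text = text.strip().lower()
    if text in ("hi", "hello", "start"):
        # Initial prompt: reveal capabilities progressively, not all at once.
        return "Hi! I can track your orders. Try asking 'where is my order?'"
    if text in ("help", "?"):
        # Help: must be reachable in the middle of any conversation.
        return "You can say: " + ", ".join(known_commands)
    if text in ("quit", "stop", "cancel"):
        # Quit: let the user terminate the ongoing process at any moment.
        return "Okay, I've cancelled that. Say 'hi' whenever you need me again."
    if text in known_commands:
        # Correct response: the ideal case.
        return known_commands[text]
    # Incorrect response: guide the user towards input the bot understands
    # instead of replying with a dead-end error.
    return "Sorry, I didn't get that. Try 'help' to see what I can do."


known_commands = {"where is my order?": "Your order arrives on Tuesday."}
print(handle_message("hello", known_commands))            # initial prompt
print(handle_message("track my parcel", known_commands))  # incorrect response
```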

Some things that you should avoid:

  1. Rhetorical questions
  2. Open ended questions
  3. Multiple questions in one message

What are the best practices? None! We have to discover them. Lots of unknowns, but lots of opportunities. Ben was really passionate about the topic and had a lot to share. He went overtime and was playing chase with the staff to finish his talk. It was really fun to hear him, and it inspired me to try out this new way of designing.

 

Fundamentals of voice-interface design

(Workshop description)

Tanya Kraljic from Nuance Communications ran a workshop on creating voice interfaces. Let’s start with the prep work we need to do. First we need to understand the extent to which we want to voice-enable our app. There is a range of technology, from basic to advanced, to achieve this:

  1. ASR: Automatic Speech Recognition - “Tell me in my words”
    This system takes specific voice inputs and acts on them. It has a pre-built algorithm which converts voice to text. It can be hosted locally as it needs to process only a limited number of commands.
  2. NLU: Natural Language Understanding - “Tell me in your words”
    This system interprets what the user says. It uses a customized algorithm where a variety of commands are mapped to intents. These systems are large in size and need to be hosted in the cloud.
  3. Dialogue: State + Context management
    This type of system remembers what you said last, even if you talk to it again at a later point. It is proactive in nature and moves the conversation forward (see the sketch after this list).
  4. Intelligence: Personalisation through external context
    This is what we might call Artificial Intelligence. This type of system not only remembers the state and context of the conversation but also has access to external context (usually in mobile apps, where the hardware can inform the user’s context) and personalises its responses.
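
As a small illustration of the ‘Dialogue’ tier above, here is a sketch (my own, not from the workshop) of state and context management: the system keeps the last intent around so that a follow-up like “make it 8 instead” still makes sense.

```python
class DialogueManager:
    def __init__(self):
        self.last_intent = None
        self.context = {}

    def handle(self, intent: str, slots: dict) -> str:
        if intent == "SET_ALARM":
            # Remember what was asked so later turns can refer back to it.
            self.last_intent, self.context = intent, dict(slots)
            return f"Alarm set for {slots['time']}."
        if intent == "CHANGE_TIME" and self.last_intent == "SET_ALARM":
            # The new utterance only carries a time; everything else comes
            # from the stored context of the previous turn.
            self.context["time"] = slots["time"]
            return f"Okay, I've moved your alarm to {slots['time']}."
        return "Sorry, I'm not sure what you want to change."


dm = DialogueManager()
print(dm.handle("SET_ALARM", {"time": "07:00"}))
print(dm.handle("CHANGE_TIME", {"time": "08:00"}))
```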

The usual flow of how information is processed is: first, ASR converts voice to text; this is called the text string literal, as it just literally transcribes what was said. Then NLU looks for an ‘intent’ and ‘context’ in the text string and hands them over to the app logic, which executes the command and gives a response to the user.
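
Here is a minimal sketch of that flow, with hypothetical stand-ins for real ASR and NLU services (the stubs below just hard-code one utterance and one intent).

```python
def asr(audio) -> str:
    # Automatic Speech Recognition: audio in, literal text string out.
    # Stubbed here; a real system would call a speech-to-text engine.
    return "set an alarm for seven tomorrow"


def nlu(text: str) -> dict:
    # Natural Language Understanding: map the literal string to an intent
    # plus whatever context (slots) can be extracted from it.
    if "alarm" in text:
        return {"intent": "SET_ALARM", "context": {"time": "07:00", "day": "tomorrow"}}
    return {"intent": "UNKNOWN", "context": {}}


def app_logic(result: dict) -> str:
    # The app executes the command and produces the response for the user.
    if result["intent"] == "SET_ALARM":
        ctx = result["context"]
        return f"Alarm set for {ctx['time']} {ctx['day']}."
    return "Sorry, I can't do that yet."


print(app_logic(nlu(asr(audio=None))))
```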

Plan your use cases

Use cases need to provide a mental model of what is possible with voice. What is in scope and what is not, and how do we communicate that to the user? Are we going for breadth or depth - that is, doing multiple things, or doing one thing in detail? Are we going for ‘one-shot’ responses or ‘transactional’ ones - that is, do we respond to what the user said and end the conversation, or do we keep engaging on the same topic? What is the level of personalisation we are aiming for? And so on.

Sketch out your NLU Framework

Once we know what the features of the app are going to be and which of them we are going to voice-enable, we have to start building the framework. We have to populate ‘samples’, which are the things your users will say when they want the app to do a particular task, and match them to ‘intents’, which are the things your app can do.

This can be done in many ways. You can launch a beta version and collect data on what people are actually saying to your app. There are other channels, such as web searches and surveys, but the samples might not be as realistic or as numerous as with beta testing. The easiest option is to go with your best guess.
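
What a first, best-guess pass at that sample-to-intent mapping might look like is sketched below; the intent names and phrasings are hypothetical, and a real NLU model would generalise from these samples rather than rely on exact matches.

```python
# Best-guess seed data, to be replaced or expanded with real beta-test utterances.
samples_to_intent = {
    "refill my prescription":       "REFILL_PRESCRIPTION",
    "i need more of my medication": "REFILL_PRESCRIPTION",
    "order a refill":               "REFILL_PRESCRIPTION",
    "when is my next dose":         "DOSE_REMINDER",
    "remind me to take my pills":   "DOSE_REMINDER",
}


def guess_intent(utterance: str) -> str:
    # Naive exact-match lookup, just to make the mapping concrete.
    return samples_to_intent.get(utterance.strip().lower(), "UNKNOWN")


print(guess_intent("Refill my prescription"))  # REFILL_PRESCRIPTION
print(guess_intent("top up my meds"))          # UNKNOWN - needs more samples
```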

There are two approaches that you can take to do this.

  1. “What would you say to the robot?”
    This is useful if you are starting from scratch and are in the discovery phase, where you need to understand your users’ needs.
  2. “Here’s what my robot can do. How would you ask for that?”
    This is easier to do. You already have an app ready, you know its features, and you just want to voice-enable it.

One of the biggest challenges in a voice interface is that you cannot constrain the user’s input like you can with a visual interface - a good example being a toggle switch, where the only input you are going to get is ‘yes’ or ‘no’. Because of this, error handling becomes very important. Below are a few of the errors that you should design for (a small sketch follows the list).

  • Didn’t hear (No input)
  • Didn’t understand (No match)
  • Not clear (Ambiguous)
  • No functionality
  • App/business prevents that
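
A sketch of how these error cases might map to distinct responses is below. The error names follow the list above; the response wording is hypothetical. The point is that each state should say what went wrong and, where possible, what the user can do next.

```python
ERROR_RESPONSES = {
    "no_input":    "Sorry, I didn't hear anything. Could you say that again?",
    "no_match":    "Sorry, I didn't understand. You can ask me to set reminders or check orders.",
    "ambiguous":   "Did you mean your home address or your work address?",
    "no_function": "I can't book flights yet, but I can set a reminder for you to do it.",
    "not_allowed": "That order has already shipped, so I can't change it any more.",
}


def respond_to_error(error_code: str) -> str:
    # Fall back to a generic message only when the error state is unknown.
    return ERROR_RESPONSES.get(error_code, "Something went wrong. Please try again.")


print(respond_to_error("no_match"))
```
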
“Everything user experiences is part of their conversation.”

Just because the app is voice-enabled does not mean visuals are not important. We have to think transmodally and frame the scope of the app with visuals. Where you place the mic button hints at what the user can do with voice. It is also important to provide conversational feedback to let the user know that the app is listening, processing and responding to the input. And finally, have a consistent and coherent point of view, because ‘no personality’ is bad personality.

Robots

And when talking about the future, how can we leave out robots! Dan Saffer gave a short talk on designing for robots at Ignite Design, which was part of the conference. It’s better to hear it in his own words.

 

A 5-minute Ignite presentation from the 2016 O'Reilly Design Conference in San Francisco.


The three days passed in the blink of an eye and left me inspired. My brain was on a different kind of high, and I could dream up so many possibilities as well as interesting challenges to tackle. My curiosity for both design and technology was satiated.

To build that future we need to domesticate/humanize machines, just like we did with wolves - and today we have our best friends, dogs.