Approaching the complexities and subtleties of an automotive UX

Mark Wyner is a freelance UI/UX designer based in the USA. For over 20 years, Mark has worked in UX, partnering with Fortune 100/500 companies and nonprofits to craft meaningful experiences for digital UIs and ecosystems in new technologies. Ahead of his presentation at UX New Zealand 2017, Mark shares his journey working on an automotive UX project.
In the early years of my career, every user experience I designed comprised a single modality and a single visual UI (with accessibility accommodations). Recent years have yielded an ever-expanding ecosystem of modalities, including products with smaller UIs and even products without UIs at all.
With these new modalities, the process of building a meaningful UX has become quite complex. But I spent the better part of two years working with a brilliant team on an in-vehicle infotainment (IVI) system for an automobile client (under NDA), and I gained some valuable insights.
IVI systems are basically computers in cars that handle mobile connections, GPS mapping, music, gauges, and more. Because they’re so rich with data inputs and outputs, there are countless opportunities to explore many modalities (like displays, lights, audio systems, and haptic modules) for interaction design, across a wide variety of data types.
The overall goals of this project were to conduct generative research on the target audience, explore features/technologies to fulfill the needs/desires of that audience, and conduct evaluative research/testing with a partial prototype. Our research and test results would help inform the client about which features would be meaningful and successful in their automobiles.
Crafting scenarios and a user journey
We began with a number of scenarios that would help us define a spectrum of prospective feature sets. Our goal was to cast a net broad enough to cover all potential features and input/output modalities.
Upon crafting all of the scenarios, we compiled them into an abbreviated user journey spanning part of a single day. The narrative itself didn’t need to be representative of a realistic day; it only needed to be all-inclusive of what we wanted to test.
This would eventually be crafted into an immersive prototype for testing and showcasing. The first step, however, would be to outline the journey that we would use for the duration of the project.

Once we had our journey map, we could begin considering how to get from beginning to end. We faced a number of challenges, but three in particular garnered a lot of our attention:
- Assessing an appropriate balance of control between driver and automobile (and its autonomy)
- Working with a potential for multiple modalities (aural, visual, and haptic)
- Understanding and accommodating cognitive load while driving
Balancing control between driver and automobile
First, we needed to decide who would be in control. Did people want a partner or a manager? Autonomy or dependency? An anticipatory or reactionary system? Did people even want their automobiles outsmarting them?
A great way to answer these questions was to use familiar metaphors to help us understand the relationship between driver and car, and ultimately the level of control the driver has. We gave the metaphors agency, essentially converting them into agents: personas, complete with names and avatars.
Let’s look at how two agents — a dog and a butler — might help us compare two types of user experiences.

The balance of control is easily differentiated between them, and the relationships are easily defined. The dog will react to our needs but will require training to learn about our preferences. The butler will anticipate our needs but will need permission to learn our preferences.
Accommodating multiple modalities for input and output
With a journey map and agents in place, we could begin to explore options for interactions between driver and automobile. This would require our taking inventory of input/output modalities, then connecting the dots between them.
In this case, we looked at inputs like mobile devices, wearables, sensors, internal controls, voice, and even IoT inputs like GPS and external infrastructure systems. Outputs might be visual displays (including a heads-up display, or “HUD”), haptic feedback, aural cues, mechanical devices, and environmental systems (light, temperature, etc.).
Every scenario presented myriad challenges, each requiring us to define the source of input and the method of feedback. The perfect activity for this was “How Might We?” (HMW). This is one of my favorite UX activities because everyone can be involved and it yields rapid results.
Let’s look at how this works, using a scenario in which a drawbridge would disrupt a mapped trip. We would begin by generating a number of HMW questions around the scenario:
- “HMW alert the driver before she sees the obstacle?”
- “HMW know about a bridge draw in advance?”
- “HMW help the driver reroute mid-trip?”
Once we had generated a series of questions for each scenario, we could begin to map out input/output modalities. This would help us build mental models for use with multiple modalities—including defining the primary function for each of the four visual displays—creating the groundwork for a didactic UX that would dramatically reduce a driver’s cognitive load.
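To make this concrete, here’s a minimal sketch of how a scenario-to-modality mapping like this might be recorded as data. Everything in it (the modality names, the drawbridge example, the display assignments) is a hypothetical illustration, not an artifact from the actual project.

```typescript
// Hypothetical sketch only: one way to record how an HMW question maps
// inputs to outputs. All names and values are illustrative.

type InputModality = "mobile" | "wearable" | "sensor" | "voice" | "gps" | "infrastructure";
type OutputModality = "hud" | "cluster" | "centerDisplay" | "rearDisplay" | "audio" | "haptic";

interface ModalityMapping {
  scenario: string;          // journey-map scenario being addressed
  hmwQuestion: string;       // the "How Might We?" prompt it answers
  inputs: InputModality[];   // where the data or driver action originates
  outputs: OutputModality[]; // where feedback is surfaced to the driver
}

const drawbridgeAlert: ModalityMapping = {
  scenario: "Drawbridge disrupts a mapped trip",
  hmwQuestion: "HMW alert the driver before she sees the obstacle?",
  inputs: ["gps", "infrastructure"],
  outputs: ["hud", "audio"], // glance-level visual alert reinforced by an aural cue
};
```

Laying the mappings out this way made it easier to see which output channels were being overloaded and which were underused across the journey.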
Storyboarding and user testing
User tests would be a critical step to see how our features would be received by the target audience. It was too early to build a prototype, but storyboards would help people visualize each scenario on the journey map.
Our facilitator walked participants through our storyboards, one for each of our two final agents, which enabled participants to see themselves in a role with an automobile. An example scenario we explored was traffic and scheduling. Based on calendar and traffic data, the automobile would determine that its driver needed to leave earlier than scheduled to arrive at her destination on time. The automobile could react in two different ways, depending on which of the two agents was in use. Sherman, with a dependent attribute, would inform the driver and ask her if she wanted to leave X minutes early based on new travel calculations. Collins, with an independent attribute, would inform the driver that it had already prepared to leave X minutes early.
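As a rough illustration of how those two postures differ in behavior, here’s a minimal sketch. The agent names come from the scenario above, but the interface, wording, and numbers are hypothetical, not the project’s actual design.

```typescript
// Hypothetical sketch of the A/B agent behavior described above.
// The ask-vs-notify split is the point; the API itself is illustrative.

interface DepartureSuggestion {
  minutesEarly: number; // computed from calendar and traffic data
}

interface Agent {
  respond(suggestion: DepartureSuggestion): string;
}

// Dependent attribute: inform the driver and ask permission before acting.
const sherman: Agent = {
  respond: ({ minutesEarly }) =>
    `Traffic is building. Would you like to leave ${minutesEarly} minutes early?`,
};

// Independent attribute: inform the driver that it has already acted.
const collins: Agent = {
  respond: ({ minutesEarly }) =>
    `Traffic is building, so I've prepared for us to leave ${minutesEarly} minutes early.`,
};

console.log(sherman.respond({ minutesEarly: 15 }));
console.log(collins.respond({ minutesEarly: 15 }));
```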
Participants’ reactions to these A/B scenarios helped us understand which features and which agent would be most desirable. The idea was to get a pulse on the aforementioned balance of control.
Building a dimensional prototype for testing cognitive load
We now had a pretty solid understanding of where we could go with our IVI system, and we were ready for our journey to be realized as a working prototype. We wanted to test the features in a simulated driving experience, to see how the UX would hold up with a high cognitive load. The prototype would need to guide participants through our journey of an imaginary day.
We used a virtual world from a Unity game environment to emulate the view through a windshield. We then mapped out a predefined path through a city that would enable us to fully realize our entire journey, introducing characters and other environmental variables as necessary. Participants were seated within a virtual automobile cockpit, complete with a steering wheel, pedals, and four visual displays.

Our facilitator sat in the passenger seat and helped guide the test drivers through the journey. They had to pay attention to traffic, pedestrians, traffic lights, and other components of a real driving experience while interacting with an array of prompts that were embedded into the game environment, based on the scenarios in our timeline.
While this environment was a simulation, it had the necessary components to help participants get a feel for using our features while driving. This was far more realistic than storyboards, wireframes, or even a single-screen experience. We learned a lot from these sessions, and our findings led us to confident recommendations for the full IVI system.
Crafting a UX for an automobile environment is quite complex. Truly understanding the relationship between driver and automobile was mission critical and a determining factor in our success. Adding in the complexities of multiple modalities and a high cognitive load placed a tall ask in front of our team. While there are many ways to approach a project like this, our approach proved to be successful. And we learned a lot that we can carry with us moving forward.
Want to hear more? Come to UX New Zealand!
If you’d like to hear what Mark has to say about voice-based UX, plus a bunch of other cool UX-related talks, head along to UX New Zealand 2017, hosted by Optimal Workshop. The conference runs from 11–13 October, including a day of fantastic workshops, and you can get your tickets here. Got a burning question you’d like to ask before the presentation? You can tweet Mark here: @markwyner