I’m totally fascinated by automation — everything from triggered email campaigns to robots that do things humans once did. When I talk about automations, I’m thinking of anything that takes a human out of a task and gives that task to a machine. We have doors that open for us as we walk toward them. We use washing machines instead of washboards to clean our clothes. I haven’t killed a plant in over a year because I have recurring notifications set in my to-do list to remind me to water them. But sometimes, removing the human element from a task causes more problems than we intend.
Head-in-the-sand automations
In the past year or so, a few people in my neighborhood mentioned that they started getting notices thanking them for recommending a local business on NextDoor (it’s sort of like Neighbourly). This seemed to be NextDoor’s way of announcing a new feature in their app. The problem was my neighbors didn’t recall making any recommendations. And in many cases the “recommendation” they were notified about was for a business they had warned people to stay away from due to negative experiences.
At first I assumed this was a case of sentiment analysis gone wrong, but after a little digging around in their FAQ I found out that NextDoor didn’t even bother with sentiment analysis. When NextDoor automated pulling business reviews into its new recommendation feature, it treated any mention of a business as a positive review, without even attempting to parse mentions as positive or negative. This caused plenty of frustration and left users to clean up the mess that NextDoor made.
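To make the failure concrete, here’s a hypothetical sketch of the difference between counting any mention as a recommendation (which is what NextDoor’s feature effectively did) and applying even a crude negative-keyword filter first. The function names, the keyword list and the example post are all invented for illustration — real sentiment analysis is far more involved.

```python
# Invented example: any-mention matching vs. a crude negative-keyword filter.
NEGATIVE_WORDS = {"avoid", "stay away", "terrible", "scam", "rip-off", "warned"}

def naive_recommendation(post: str, business: str) -> bool:
    # Any mention at all counts as a recommendation.
    return business.lower() in post.lower()

def filtered_recommendation(post: str, business: str) -> bool:
    # A mention counts only if no obvious negative phrasing appears.
    text = post.lower()
    if business.lower() not in text:
        return False
    return not any(word in text for word in NEGATIVE_WORDS)

post = "Stay away from Acme Plumbing - they overcharged me twice."
print(naive_recommendation(post, "Acme Plumbing"))     # True: warning counted as praise
print(filtered_recommendation(post, "Acme Plumbing"))  # False: filtered out
```

Even a filter this simplistic would have caught my neighbors’ warnings; the point is that skipping the step entirely guarantees the mess.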
We often ask our users to clean up our messes. We assume they want notifications when they don’t. We automatically add customers to mailing lists when they purchase, without even asking if they want to hear from us again. We send unnecessary notifications early in the morning and late at night. And then we tell our users that it’s their job to opt out. Can we really not be bothered to think our notifications through? Do I need to get a pop-up, an email and a text letting me know that something trivial needs my attention? I really don’t.
Fail funny, fail often, part 1
Some automations create negative experiences. Others fail too, but at least in a funny way that doesn’t hurt anyone or inadvertently praise businesses that rip people off.
I opened my Facebook app a couple of weeks ago and it wanted to help me put together a little montage of photos from my camera roll. The first photo it wanted to add was my latest one — a reminder to myself about a pair of sunglasses I liked, with the overlay “A Night Out”.
The other recent photos I had on my phone documenting my big Saturday “Night Out” were three pictures of my sleeping dogs and one adorable photo of my whole family wondering if they could have the steak I was eating.
This was pretty funny to me, as my Saturdays over the years have gone from chasing rock and roll dreams to chasing rescue dogs around the yard. But think for a moment: What if the most recent photos were of my totaled car from an accident scene? What if they were screenshots documenting harassment? Remember, these are photos from my phone, not ones I have actually chosen to share on Facebook. And both of those examples — harassment screenshots and photos of my smashed-up car — have at one time been the most recently taken photos on my phone.
Facebook should really know by now to be careful with their automations. They have been called out for, and acknowledged, showing photos of relatives who have passed away in a “let’s celebrate your year!” feature. They have thanked me on Mother’s Day for my hard work as a mother. But I’m not a mother, nor can I have children. Facebook is continuously working on its algorithms, but in the meantime, who are its test subjects and what are they being put through?
Fail funny, fail often, part 2
In her special, “Live”, comedian Tig Notaro talks about her traumatic experiences with illness, cancer, breakups and the death of her mother. In one segment, Tig describes the questionnaire a hospital sent to her mother after her mother died in the hospital, asking how the stay went. Tig quips, “Not great.”
She put it simply: there could have been two lists, one for people who are alive and one for people who are not. This should be the baseline of segmentation for automated mailings — No Questionnaires to Dead People.
Automating for good
There are more stories than I could ever tell about how automations fail when we remove the human element, but fortunately the number of stories I can tell about automating for good is growing. For instance, SeamlessDocs is working to automate the processing of government forms, which results in fewer mistakes, ensures forms are complete, and frees up taxpayer money for more important things than printing paper, not to mention saving a ton of trees!
There are fun automations that make people’s days brighter, like Spotify’s Discover Weekly feature. Although I prefer the intimacy of a mixtape curated by a friend rather than a machine, the process behind Discover Weekly is pretty amazing.
In New Zealand there are pizza delivery robots! In the US we can order pizza via Twitter + emoji. As cool as that is, there are implications to these fun automations. To tweet for pizza, you have to set up an account with your address and credit card details, of course, and you must also have placed an order via their online ordering system to establish a preferred order. What if, down the road, you’ve forgotten to update your info and your pizza goes elsewhere? What happens if you tweet a pile of poo? This would be a massive personal problem if I could tweet a dog emoji at a rescue facility and they could just send me dogs on demand. These details highlight that even when we’re automating for fun or for good, removing the human element means things can go wrong or simply not work.
Ethical and empathetic design
In the design world, we do have some published thoughts around ethics. We, for the most part, have great Codes of Conduct. UXPA has a code of ethics that focuses heavily on how we treat our test participants. There are groups who research Robot Ethics. But when it comes to our automations, sometimes we’re not even hitting the baseline in our little tech bubble.
Design is a powerful medium, and we should always consider how our work will affect people. Are we automating solely for our own benefit, or do our automations benefit our users, too? Do we simply design what people tell us, or do we question problematic automated solutions?
When designing automations, remember to play it forward and really consider how and when they will be received by the person on the other end. Removing the human from the task is no excuse to forget the human on the other side of the machine.
Want to hear more? Come to UX New Zealand!
If you’d like to hear more about Jenn’s fascination with all things automated, plus a bunch of other cool UX-related talks, head along to UX New Zealand 2016 hosted by Optimal Workshop. The conference runs from 12-14 October, 2016, including a day of fantastic workshops, and you can get your tickets here.