“Hi Mr Product Manager, it’s your Development team here.
Instead of that feature you asked us for, we’d like to propose a new project. It’ll take a good chunk of the team about 9 months to complete. Customer-facing benefits? There aren’t any — not in the short term, at least. In fact, the project is only successful if customers don’t notice that anything has changed."
"Hey, where are you going? Come back, Mr Product Manager! We haven’t even told you it’s in a risky high traffic part of our product yet…"
Okay, there may be some artistic license at play. The truth of the matter was that the OptimalSort participant interface had been around a long time.
Despite the team gamely attempting to extend it “just one more time” to add new capabilities, we were struggling to deliver work in a timely fashion at a level of quality we were happy to release.
It was time to do something and, whilst there were items on the product backlog that were more immediately attractive, we made the call to prioritize a rewrite of OptimalSort.
A history of OptimalSort
If you’ve been a longtime user of OptimalSort, you’ll probably remember this user interface (UI) that we rolled out in 2015.
Back then, things were pretty simple for participants. They were asked to sort cards into categories, which they could create themselves (in an open card sort), or rename (in a hybrid card sort). That was pretty much it.
Come 2016, we rolled out some more features. Participants could rank their cards in order of priority.
And you could even launch image-based card sorts, in which participants are given a set of images instead of plain card labels.
Then came category card limits, which allow researchers to set the maximum number of cards that can be sorted into each group.
After that, the unsorted cards indicator was introduced to help participants gauge how far through the study they were.
With all these sparkly new features piled on, we decided it was time to make card sorting on mobile a much more pleasant experience for people participating in studies. There were also some behind-the-scenes technical strains that needed attention.
So, how did we go about it? Slowly and carefully. OptimalSort is one of our most heavily used and feature-packed tools, so it was critical that we neither eliminated capabilities nor impacted the experience for participants.
Ultimately that boiled down to a combination of painstaking feature analysis, considerable development and testing effort, and meticulous comparison between the old and new interfaces. On that last note, we were helped considerably by some recent additions to the Optimal Workshop family, or as I like to call them: uncorrupted test subjects!
Once we were satisfied that all was well, we commenced a steady, incremental rollout across new OptimalSort studies. Participant completion rates were studiously monitored, our Customer Success team waited for the slightest hint of trouble, and at all points we had the ability to revert to the old UI at a moment's notice.
Thankfully though, the sleepless nights and increased coffee bills paid off. The migration was a success and everyone is now using the new, improved (and functionally identical) interface.
What the new OptimalSort means for you:
- Improvements for research participants completing studies on mobile and touch devices
- A reduction in abandonment rates
- More features added to OptimalSort in the future, as it's now faster, better and stronger behind the scenes!
"Hi Development team, it’s your Product Manager here.
Nine whole months to get a codebase that’s up to date, extensible and ready for whatever the future has in store — and our customers won’t be impacted by the changes? Sounds like a plan. Go for it!”