How much of this is “nudging” vs. “clearly explaining trade-offs”?
I’ve not done any rigorous research, but I’ve participated in projects that resulted in dramatic shifts towards customers choosing what the dev team thought was the “best” outcome, just by altering wording, or making “dangerous” choices harder (such as by requiring more clicks to enable).
A nudge has to push the user towards one particular decision over another; that's the whole point. It's opinionated. The factors I listed aren't inherently opinionated; we could've tried to improve them without pushing the user towards a specific choice:
E.g. speed: we could've removed earlier parts of the onboarding to make the overall experience shorter, or compacted the UI so the choices were visually easier to skim.
Expertise: we could've reassured the user, before the choice, that all the options were good because we're the experts and wouldn't give them a bad one - so don't agonise.
Cognitive load: we could've reduced the info we showed about each option, hidden it away behind a modal, or re-written it in plain English. The legal team told us we had to use the legal descriptions of the choices, which included technical language.
Confusion: we could've made a visualisation of the impact of their choice that changed as they swapped between options, showing them a more tangible outcome. It was a complicated concept to grasp, so a visual aid instead of just written descriptions might've helped.
To be clear - I'd be surprised if these things would've worked, and I'm certain setting a default made a difference. The point I'm making is that I don't know for sure how much of a difference. To my eye, the change that implemented the default also improved the overall design in these other ways. We didn't isolate exactly what made the improvement; we were just happy it happened.
The point I'm making is that you could quickly skim-read this story of a team stuck on a problem who, after implementing defaults, found their conversion rates jumped 11x (holy shiiiiiiii-) and it sounds like it's all thanks to nudge theory. It's exactly like a case study you'd see in a co-design agency's portfolio.
But in the actual, messy world of designing interfaces, it's always a bit more complicated than that. No change is truly isolated or tested in a controlled, academic fashion. You just design your best shot each time and see what works. Because of this, it's hard to definitively say an improvement was because of a nudge. Best I can do is, "I mean, probably" haha.
I was working on a fintech project (gonna be vague as it's not yet released).
The legal team told us we couldn't use default choices anywhere, as it could count as giving financial advice. Fair enough. So we designed the onboarding, and there was this choice the user had to make before we could create their account.
During testing, we found people were getting really stuck on this choice, to the point of giving up. The choice actually had quite low impact, but it was really technical - a lot of people just didn't understand it. Which makes sense: our users weren't financial experts, and non-experts were exactly our target user. The choice was also a new concept for the market, so we couldn't relate it to other products they might know. The options had quite a lot of detail once you started digging into them, detail we had to provide if somebody went looking for it. Our testers would hit this choice, get stuck, feel the urge to research the decision, get overwhelmed, and give up.
We spent so long trying to reframe this choice and explain it in a nice succinct way; we even tried to get the feature removed entirely - but nothing stuck.
Eventually after lots of discussion with legal we were allowed to have a 'base' choice, which the user could optionally change. We tested the new design, and it made a significant difference in conversion rates.
Huzzah for nudge theory! Right? Well, maybe. I think it's a bit more complicated.
- The new design was faster. There were fewer screens with simpler choices. It went from 'pick one of 5' to 'here's the default, would you like to change it?'. Was it just the speed that made a difference?
- The user was not a financial expert, and the company behind the product was. In some sense, was the user just thinking 'these guys probably know more than me, I'll leave it at that'? Imagine trying to implement this exact change on something the user is an expert in - say, your meal choice on an airplane. I imagine most people would think "How rude, choosing for me! I'm an expert in what I feel like eating, I want to see all the options."
- It had less cognitive load. The whole onboarding flow was already really complicated, so just reducing the overall mental strain of making an account may have improved the whole experience. E.g. if we had removed decisions earlier in the flow, would this one still have been as big of an issue? We never had time to test it, so I can't say for sure.
- Lack of confusion == confidence. For the users who didn't look at the options and took the default, did they just feel more in control and confident because they weren't exposed to unfamiliar terms and choices? They never experienced the urge to research.
On the surface, the new design worked great, so job done. But it's hard to say definitively that it was because of nudge theory. I don't think you can blindly say "oh yeah, defaults == always good" and slap them on every problem - which is why the design-test-iterate loop is so important.
Unless the dev has the authority to make the decision, what he should really do is explain the tradeoff: “Hey marketing, I know you want this feature, but it’s gonna cause us to drop down in Google results. Is it still worth it?”
+1 for mentioning useful configuration options. Prominent examples include the "last tab close" behaviour and hiding the "Disable JavaScript" option in Firefox. In both cases, the decision seems to be driven by the preferences of a specific group of developers rather than informed decision making or listening to users.
> The new design was faster. There were fewer screens with simpler choices. It went from 'pick one of 5' to 'here's the default, would you like to change it?'. Was it just the speed that made a difference?
If you're just going from "pick one of 5" to "pick one of 5, but with a default", I wouldn't expect one or the other to be "faster". Was the new design different in more ways than that?
As for the rest, I think the beneficial features of the design are predicted by nudge theory. "Providing a credible default reduces cognitive load and confusion on the path to a decision, as the user can just trust the defaults have been set up reasonably" has always been the theory for why nudges work.
So developer comfort takes precedence over user choice? Or are you saying options are just way more difficult to implement than they are worth (considering the small proportion of users who actually take advantage of them)?
Thank you for letting us know your thoughts; you definitely raise some valid points about how much control we should give our users. In our case, our philosophy is that our product should assist users in making the correct decisions, instead of forcing decisions upon them.
One middle ground we are exploring, though, is making it increasingly difficult to bypass our interventions each time.
That applies when a choice must be made before the user can proceed, or when making one choice has an associated opportunity cost that prevents or discourages the user from making other choices. If the user is presented with sane defaults from the start and can also toggle a checkbox in a settings UI to instantly test alternatives, the study does not apply.
"...but if you’re not familiar with it then enjoy the research and storytelling of Barry Schwartz who discusses how too many options can not only lead to your customers making no choice, but (counter-intuitively) resenting the choices they do make."
That's very true. And I think the best compromise/solution is to hide advanced features/choices so that one click will still produce a great result, but if someone needs more options, that ability is still there.
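As a rough illustration of that pattern (a made-up sketch, not anything from the article or the products discussed), the same idea shows up in API design as "sane defaults with optional overrides": the simple call does the right thing, and the advanced knobs are still reachable for whoever needs them.

```typescript
// A minimal, made-up sketch of "sane defaults with advanced options still available".
// ExportOptions, DEFAULT_OPTIONS, and exportDocument are illustrative names only.

interface ExportOptions {
  format: "pdf" | "png";
  dpi: number;
  embedFonts: boolean;
  colorProfile: "srgb" | "cmyk";
}

// The defaults are the "one click, great result" path.
const DEFAULT_OPTIONS: ExportOptions = {
  format: "pdf",
  dpi: 300,
  embedFonts: true,
  colorProfile: "srgb",
};

// Callers who don't care get the defaults; callers who do can override any subset.
function exportDocument(name: string, overrides: Partial<ExportOptions> = {}): string {
  const opts: ExportOptions = { ...DEFAULT_OPTIONS, ...overrides };
  return `${name}.${opts.format} @ ${opts.dpi}dpi (${opts.colorProfile})`;
}

console.log(exportDocument("report"));                           // default path
console.log(exportDocument("report", { colorProfile: "cmyk" })); // advanced path still exists
```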
This kind of data has a different kind of selection bias.
When you're looking at user actions, you can't distinguish between two different kinds of behavior: (1) the user is doing what he wants to do; versus (2) the user is doing something because he can't figure out the alternative, or that alternative is inconvenient/unwieldy.
So to a certain extent, the results of such studies can serve to entrench bad design decisions rather than improve the system. Developers always have to look for every possible alternative explanation for a user's behavior.
Asking people what they want is a very small part of it. You research by watching how users interact with the system, then you take risks by experimenting and seeing if changes improve the outcomes or not.
It is a business decision: deliver a feature faster to 99.9 percent of people, or later to 100 percent. Neither is right in all scenarios, which makes the whole argument pointless without a specific scenario. There's a slider you can move between the percentage of users supported and the time and effort required to make it happen. It is about balancing trade-offs. Taking one fixed stance will just make you inflexible.
I’m a consultant myself (UX research and strategy), and I like several of the points in this article - for example, why churn rates are higher for people who want one objective versus another. This is definitely an important distinction. Not all users are the same.
Offering users the features and benefits they’re looking for is kind of tricky. In enterprise settings, the users are more likely to know what they need. When they’re right, they just need their idea designed and built. Where it gets difficult is when they ask for a fix for a symptom, but the root cause is what really needs fixing. For instance, people might ask for a surface-level improvement to a step of registration or onboarding that shouldn’t even be there because it doesn’t give them any value (like confirming an email address, or even being asked to sign up for a product they’ve never tried). Or, two different users (who map to different personas) make requests that contradict each other.
I'm not making that argument, though that is often the reaction I get.
My argument is that you should understand when you're choosing to compromise the user experience, why, and how compromised it is. For example, you should know roughly how much worse the user's battery life is because of your decision. That makes the decision an informed one, based on your goals, budget, time to market, and so on.
To claim a decision born of necessity comes with no compromises is delusion. That delusion might not kill you but it points toward muddled thinking. That's dangerous at the best of times, doubly so for a startup.
> Or how when you X one of those panels on some topic you have no interest in and it says "Got it, message hidden" then pops back up in a few days?
And
> Clicking X means NO.
While clicking X might mean "no" in all circumstances for you, that doesn’t mean it’s the same for everyone else.
To handle more variety, you really want at least two affordances: “never” and “not now”.
Note: when a user turns a feature off (your “never” case), whenever you later change that feature, all the users who turned it off will be left behind, even if the change or improvement would appeal to them. So you need a way to communicate the changes (even though they said “don’t show this feature”) and a way to change that preference - roughly sketched below.
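Here's a hedged, hypothetical sketch of how the two affordances might be modelled, with "never" only re-evaluated when the feature has materially changed. The names (DismissalKind, PromptState, shouldShowPrompt, featureVersion) are all invented for illustration, not from any real product or library.

```typescript
// Hypothetical sketch distinguishing "never" from "not now" dismissals,
// and re-surfacing a "never" only when the feature has materially changed.

type DismissalKind = "never" | "not_now";

interface PromptState {
  kind: DismissalKind;
  dismissedAt: number;    // epoch ms of the last dismissal
  featureVersion: number; // version of the feature the user dismissed
}

const SNOOZE_MS = 7 * 24 * 60 * 60 * 1000; // "not now" quietly resurfaces after a week

function shouldShowPrompt(
  state: PromptState | undefined,
  currentFeatureVersion: number,
  now: number = Date.now()
): boolean {
  if (!state) return true; // never dismissed before

  // "Never" is honoured until the feature changes in a user-visible way;
  // bumping featureVersion is how the team signals "this is worth re-announcing".
  if (state.kind === "never") {
    return currentFeatureVersion > state.featureVersion;
  }

  // "Not now" is just a snooze.
  return now - state.dismissedAt >= SNOOZE_MS;
}

// A user who said "never" on v1 sees the prompt again once v2 ships.
console.log(shouldShowPrompt({ kind: "never", dismissedAt: 0, featureVersion: 1 }, 2)); // true
```

The design choice being sketched: "not now" is a time-based snooze, while "never" is respected indefinitely unless the team explicitly decides a change is significant enough to re-announce, which keeps the "left behind" problem visible as a deliberate versioning decision rather than an accident.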