Can you provide some evidence or reading on this? I understand A/B typically measures _conversions_, but couldn't you use quantifiable proxies for usability? For example, things like form abandonment rate and time to complete a task.
To be clear, I'm not contesting the value of usability tests involving watching actual users. Our UX team does this with every release and it's invaluable. But I would think data from A/B testing would only help inform our decisions.
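For concreteness, here's a minimal sketch of what I mean by treating a proxy like time-to-complete as an A/B metric: collect task times per variant and compare them statistically. The data, variant names, and the choice of Welch's t statistic here are all my own illustration, not anything from a particular tool:

```python
# Hypothetical sketch: comparing time-to-complete between two form variants.
# All numbers below are made up for illustration.
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / na + vb / nb)

variant_a = [41.2, 38.5, 44.0, 39.7, 42.3, 40.1]  # current form, seconds
variant_b = [33.4, 35.1, 31.8, 36.2, 34.0, 32.9]  # redesigned form, seconds

t = welch_t(variant_a, variant_b)
print(f"mean A = {mean(variant_a):.1f}s, mean B = {mean(variant_b):.1f}s, t = {t:.2f}")
```

A positive t here means variant B's tasks finished faster on average; whether that difference is *meaningful* still requires a proper significance threshold and, crucially, the qualitative context a moderated usability test provides.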
Despite popular belief, A/B testing doesn't actually work for usability testing: it optimizes a conversion metric, not the user's experience. So a "successful" A/B test can easily coexist with, or even encourage, bad usability practices.