
Apple has always done this, and it's frustrating. Another good example is how their desktop mice have always had only one button, because they claimed that having two or more was too complicated for users.

Well ... the problem is that having more than one button is actually useful, and when the buttons are hardware buttons at least they are discoverable. Instead, there's lots of functionality hidden behind Control-clicking and Option-clicking and maybe more that is very hard to discover on your own (and this goes back to the pre-OS X days even).

Force touch is similar in that the overall discoverable interface is too simple, and then the added functionality on top of that is way too hidden.




You touched on another interesting point. I remember reading somewhere that Steve insisted the Mac shouldn't have a two-button mouse for such a long time precisely because it would force developers to make a simpler UI, one where no 'hidden menus' could be created. The lazy option of just throwing everything under a 'right click' simply couldn't exist.

If you think about it, Force Touch could create that 'right click' in iOS, which I think is a bad idea for starters. We already have hidden gestures and tap & hold, so there's already enough stuff hidden from plain sight in the User Interface.

Let's face it: the popularity of these devices came from the fact that they are really easy to use for the average user (think of your mom, for example). Will she know about Force Touch and incorporate it into her interface discovery process?


I'm going to have to disagree: not with discoverability being good, but with the idea that the additional button isn't worth the complexity.

It's a matter of expressiveness and bandwidth. You can do more pointer-related actions with multiple buttons, and they aren't confusing the way keyboard modes are, because you've got feedback on the display at the pointer location. Primary clicks do the important thing; secondary clicks give you a set of other options or an alternative action. It's very easy to pick up given its universality and its usefulness. A single button just requires you to press mode keys with the other hand to get the expressiveness needed in a reasonably intuitive, and powerful, pointer-driven UI.

Imagine if we didn't have multitouch and the only gestures we had on touch displays were tap, tap-and-hold, and drag...


I've long found it strange that the same company that didn't want a second mouse button, because they didn't want to hide functionality in context menus, now hides so much functionality behind gestures with varying numbers of fingers. This started while Jobs was still at Apple, so it can't be blamed entirely on his successors.

Force Touch is a fascinating example because it's the equivalent of a "right click". The same right click Apple famously used to hate because of the very discoverability issues you have raised.

I think if Apple keeps Force Touch to certain specific use cases, e.g. popping up contextual menus off app icons, then it could still be very useful.


I really doubt the discovery of Force Touch itself is an issue; it was in all their ads and they made a big deal out of it. When Force Touch can be applied, and for what purpose, is the bigger issue. It's the same problem Apple has with the Option key. Right click is somewhat sensible: it works almost everywhere, it gives you a "little menu" whose contents you can equivalently find in the app's main menu, and it always returns some result.

But with Option and Force Touch it's unclear when they're applicable, what they'll get you, whether the action even went through, and (for Option) where the resulting button even is.


Oh, Force Touch. This was one of the biggest UX fails of the modern Apple era, and it all goes back to one of Apple's Macintosh-era fails: the one-button mouse. This is one thing where it can reasonably be argued (as I do) that Jobs was wrong.

Right-clicking became ubiquitous and even consistent on Windows. You needed the same behaviour on Macs, but with no second mouse button you got it by using modifier keys. In some apps it was Option, in others Control, and so on.

The problem was that those modified clicks had no discoverability and less consistency than right-clicking. Windows users on the other hand learned that nothing bad could happen with a right click.

Force Touch was a similar kind of UX fail with no discoverability. There was (and is) no consistency in what a Force Touch means or should mean, or even whether it's an option at all.

But yes, Force Touch did make the iPhone thicker and heavier.


Right-click on Mac laptop has the same non-discoverability.

I'd argue Force Touch is more discoverable because it's very common for new users to press hard enough to accidentally activate it. Then you play around for a few seconds until you realize how to do it intentionally.

A couple of MacBook users I know have never once right-clicked and are amazed at how handy it is once it's demonstrated to them.


Force touch is even less discoverable than gestures. If it is used to distinguish between "simple" and "complex" actions in the way right/left click do with a mouse- as Apple attempted in several places in iOS- it's far easier to activate accidentally than intentionally. Overall the feature was a horrendous usability misstep, and it's not surprising it was quickly nixed.

I don't see how right-click and drag could possibly be natural or easy on anything but a mouse.

Apple introduced multi touch to solve problems just like that.


That's ok, but why limit triggers to button areas when 1-finger click anywhere on the trackpad can be left-click and 2-finger click anywhere can be right-click. Macs also have force touch for additional actions. There is probably a 3-finger click option as well. I don't have to know where the buttons are or look for them, no eyes need to be on the touchpad or keyboard, it's just an extension of my fingers.
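The finger-count mapping described above can be sketched as a tiny dispatch table. This is a hypothetical illustration of the idea, not Apple's actual driver logic, and the action names are made up:

```python
def click_action(finger_count: int) -> str:
    """Map the number of fingers resting anywhere on the pad to a
    click type. Hypothetical mapping loosely mirroring macOS
    trackpad defaults; location on the pad is irrelevant."""
    actions = {
        1: "left-click",   # primary action
        2: "right-click",  # secondary action / context menu
        3: "lookup",       # e.g. a three-finger tap gesture
    }
    return actions.get(finger_count, "ignored")
```

The point of the design is exactly what the comment says: the dispatch depends only on the finger count, never on hitting a particular button region, so no eyes need to be on the pad.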

The problems posed by multiple mouse buttons are very similar to the "hidden swipe gestures" people are complaining about elsewhere in the comments for this article. There's nothing inherently wrong with either - any more than there is with e.g., English being a language with thousands of "hidden" words that people have to learn and remember instead of limiting you to only the words visible in an autotext menu - but you have to think about how people learn them, how consistently they're available, how to accommodate people not knowing them or learning them at different rates, etc.

HN commenters tend to be power users of desktop-style UI more than touch or mobile-style UI, which can make it hard to see the similarities between them (right-click conventions really aren't much more consistent or obvious than swipe conventions).


Personally I find gesture support annoying, and its functions are better done with keyboard shortcuts: if you want to show the desktop, pressing Windows-D is much quicker and easier. The first thing I do on a new laptop is find the touchpad settings and turn everything off.

I find it ironic that Apple used to push the one-button mouse narrative, claiming two buttons were overwhelming, yet now you're supposed to contort your fingers into a million different gestures on a touchpad.


It's just as easy to press harder "force touch" as it is to right click.

Sorry, but the Windows 95 user interface had the exact same discoverability issue.


Both of your examples are definitely worse on OS X than on basically every other OS in existence. As proof, OS X is slowly but surely moving away from at least one of them, and I'd bet that within two or three revs it will have moved away from both.

Menus just aren't that hard to hit; otherwise, all clickable items in a program would have to be on a screen edge. In fact, according to Fitts's law, they should be jammed into the corners of the screen, since a corner is even easier to hit than an edge. But they aren't, because hitting them is not that hard for anybody who's bothered to use a pointing device in the last 20 years. More important, trackpads are becoming the de facto pointing device on Apple-sold computers, and Fitts's law works differently on a pad than on a moving device like a mouse. The edge of the touch surface is the infinite target, not the screen. Since touch devices are not 1:1 mapped to screen area, all clickable interfaces should be at the edge of the pad relative to wherever the cursor starts.
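The Fitts's law being invoked here is simple enough to compute. A sketch using the Shannon formulation, with made-up device constants `a` and `b`, shows why an effectively unbounded target (an edge the pointer can't overshoot) is so much faster to hit than a small interior one:

```python
import math

def movement_time(distance: float, width: float,
                  a: float = 0.1, b: float = 0.1) -> float:
    """Shannon formulation of Fitts's law:
    MT = a + b * log2(D/W + 1).
    a and b are device/user constants; the values here are
    purely illustrative, not measured."""
    return a + b * math.log2(distance / width + 1)

# A 20 px deep menu item 800 px away:
interior = movement_time(800, 20)

# The same item on an edge the pointer stops against: the
# effective target depth is unbounded, modeled here crudely
# with a very large width, driving the difficulty toward zero.
edge = movement_time(800, 10_000)

print(interior, edge)  # the edge target is much faster
```

The same formula explains the pad-vs-mouse asymmetry the comment describes: what counts as `width` is the target in the device's own coordinate space, which for a non-1:1 touchpad is the pad edge, not the screen edge.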

Likewise, a second mouse button turns out to have been a great idea, so great that decades later Apple not only supports right mouse buttons, but ships a default mouse with right-click support, has even managed to cram a touch surface into it, and makes its trackpads recognize a two-finger tap as a right-click. Why? Because decades into the great GUI experiment it finally dawned on somebody that interface complexity requires more than one button; otherwise half of your interface gets buried behind a modifier key (or two or three) or a pile of menus and your sole button.

I've watched many dozens of users move to OS X, and one of the first things they ask is "why is the menu bar way the hell over there?", pointing to the top of the screen (or to an entirely different monitor), usually preceded by a series of questions about how to do some function that is clearly on the menu bar. Since the bar isn't coupled to the actual program window, they assume its function is decoupled from the program and don't realize that what they are looking for is there.

It's an embarrassingly repeatable user interface experiment that's left me convinced that the only reason it's still part of the OS is to differentiate OS X from Windows.

Physically decoupling a software interface from the software is almost always a bad GUI idea if you can help it. It repeatedly confuses users, particularly new users. It's like putting the steering wheel of your car in your house and the gas pedal in your back yard shed.

Everything from MDI to full-screen apps is now slowly creeping into OS X, because time and time again users are shown to find those alternatives more usable than the old Apple standby.

Let's stop tooting this "everything Apple does in UI is best" horn. Lots of what Apple does in UI is great, some of it is even the best; these things are simply not.


Agree. The funny part is that Apple's touchpad driver is open-source (under a scary Apple Open Source license, but still). You can read it. Or at least, you could circa 2006.

I've been meaning to hack on the Linux driver since back then, but honestly always expected someone more competent to try a touchpad-driver-rewrite instead. Maybe they have, but we still seem far behind, and I'm pretty sure it's a software issue.

That being said, physical buttons are superior. Clicking one means you know a click registered, so even if the software is doing something stupid, you're assured of your interactions. I don't understand why the buttons have gone away.


In the early days of Macintosh they tried hard to make this true. The excessive and inconsistent use of click vs double-click in apps even then was confusing.

These days it's hard to find an app that can get by without Cmd+click, which is harder than clicking a right button, and that would be easier still if the right button were physically distinguishable. Long-press is super annoying, as is force click: I never want the action that comes up when I accidentally force-click. With the prevalence of touch phones, the two-finger tap might be the easiest of them to remember (if not as precise).


This was a large criticism I had of Force Touch ("3D Touch"), and why I was not at all surprised when it "died."

Force Touch may have been an impressive technological showpiece, but at its core it had no discoverability, and you couldn't really even describe how to use it; new users just had to experiment.

This meant apps couldn't use it for important functionality, because it would never be found, which ultimately led most UI designs to ignore it entirely and sealed its irrelevance.

A mouse's right click has the same issue, and many users never "discover" that either. I've met people who only know how to use the clipboard in Microsoft Office applications because it has a UI for it (via the Clipboard section in the ribbon). They simply don't right click, ever.


You reminded me of another iOS UI/UX fail: the long press (and Force Touch).

Apple famously avoided adding a second mouse button for many years. Steve Jobs was against it. Why? Because it lacked discoverability and consistency. He ultimately lost this fight because the context menu on Windows became pretty consistent and universal.

But his ultimate point applies to long presses (and similar): where is the discoverability? Most people don't realize a long press can do something different, and if they do, there's no consistency to it.

Even in your example, why would I expect to move the cursor around by long pressing the space bar? They added a non-intuitive gesture to make up for the failure they created by messing with the original copy and paste that worked just fine.

Both Apple and Google created a similar problem with the address bar in their browsers. Notice how hard it is to select part of a URL? Why would you want to do that? Example: removing the stupid text fragment on Google search results so you can copy and paste the URL.

Click (or press) once and you select the whole URL. Click (or press) again and it brings up a menu with options like cut, copy and translate. To get rid of the selection you have to click outside the selection area. Now you can click into the text, like the search term, but the selection snaps to word boundaries. What if I just want to correct a spelling without re-typing the whole word?

All I want is to finely select in the URL and they've added so many extra steps and barriers to make this as difficult as possible in the name of "usability".


One finger more than the scroll gesture … I’m not sure how you could do it differently. Requiring three fingers on the mouse would just be weird (I’m all for allowing three fingers additionally to two fingers, though!) and requiring just two on the trackpad is kinda not possible (scrolling is obviously more important and scrolling with two fingers is obviously superior to every other implementation of that on a trackpad).

And it’s literally only the number of fingers that’s different. And the only non-obvious gesture is probably to swipe up to invoke Mission Control.

Also, the way OS X is set up, those gestures serve as shortcuts (like hot corners used to, in the olden days). Nearly every single function can be reached via just clicking (Notification Center, switching from desktop to desktop or fullscreen app to fullscreen app) or buttons on the keyboard (Mission Control, Launchpad).

Just relatively less important and central functionality is relegated to either the context menu (the ability to look up words in a pop-over can be activated via the context menu or three finger tap) or keyboard shortcut and gesture (swiping down for App Exposé).

Accepting an arbitrary number of fingers … sure I’m all for it. But those gestures aren’t really anything worth criticising. They are shortcuts that have to be explicitly learned. That’s why there are videos in the control pane. Apple knows that those have to be actually taught. That’s why the UI has non-gesture alternatives that are obvious and in your face.

This is not to defend Apple – I just want to point out that little personal pet peeves like these aren’t necessarily indicative of actual problems actual users have, especially if they come from an expert. (I also think you can always improve things and that’s certainly also true of how OS X implements gestures. But that’s just a truism that will never be wrong. As someone working in the field: Luckily. There are probably juicier targets than gestures in OS X, though.)

