
«Critically, it lacks the bandwidth of other forms of input.»

The most classic forms of input (keyboard and mouse) are touch manipulations of plastic artifacts: their bandwidth is essentially a subset of touch, rather than the other way around.

With modern touch screens we are only beginning to scratch the surface of touch sensory bandwidth. Certainly there remain plenty of interesting areas in which to expand haptic feedback. There are also plenty of ways left to blend and hybridize touch sensitivity and gestures, and even context-sensitive/context-reliant physical objects (like digital pens/pencils, or even how our keyboards might interact with touch).

«It's for consumption, not creation.»

Many of the earliest forms of human creativity were manipulating things through touch. Maybe we haven't yet arrived at something like a good approach to digital sculpting or pottery, but that doesn't mean that we won't.

Even then, I've seen amazingly creative touch apps. There are some really cool touch apps in music creation, just off the top of my head. The fact that you associate touch with consumption (and "less cognitive development") may say more about you than about the world of touch apps that already exist.

«It leaves the screen dirty.»

Yeah. So? Things get dirty when you use them. You find ways to clean them. Are you keeping your screen pristine and untouched/undirtied because it needs to be ensconced in a museum some day?

I get that it's a personal preference and it plainly drives the Monk-esque sorts of OCD wild, but at the end of the day, entropy wins anyway.

«It encourages wasteful use of screen real estate by UI.»

One person's waste of space is another person's accessibility. Touch invites larger click targets to better accommodate the fatness of people's fingers. That accommodation, however, also helps people with accuracy issues with a mouse (which has always been a big deal, and an easily ignored one). Even further, it helps in situations that shouldn't demand accuracy in the first place. When I'm working on a spreadsheet at work, why does every click need to be a "headshot" to get my work accomplished? Lining up those shots takes time and energy I could be spending on the actual work. Just because mice can be pixel-accurate doesn't mean they should have to be. As monitor resolutions and DPI increase and pixels shrink, this only becomes crazier when an application has a small pixel-accurate hitbox. (It amazes me how many mission-critical enterprise apps you see with old-school 16x16-pixel toolbar icons running on modern hardware, as if those businesses need to be FPS sniper schools.)

Fitts's Law suggests we should do better than that. Targets that are bigger and/or closer to the mouse pointer are easier to hit. If it takes touch to force more developers to be mindful of Fitts's Law, then that alone is reason enough to support touch.
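For concreteness, here's a minimal sketch of the Shannon formulation of Fitts's Law. The constants a and b below are illustrative placeholders, not values measured for any real device:

    // T = a + b * log2(D/W + 1): D = distance to the target, W = target
    // width along the axis of motion. a and b are device-specific
    // constants; the defaults here are made up for illustration.
    function fittsTime(distPx: number, widthPx: number, a = 0.1, b = 0.15): number {
      const indexOfDifficulty = Math.log2(distPx / widthPx + 1); // in bits
      return a + b * indexOfDifficulty; // predicted seconds per acquisition
    }

    // A 16x16 icon 800px away vs. a 48x48 target 200px away:
    console.log(fittsTime(800, 16).toFixed(2)); // ≈ 0.95
    console.log(fittsTime(200, 48).toFixed(2)); // ≈ 0.46

Bigger and closer targets lower the index of difficulty, which is exactly the pressure touch puts on UI design.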




>Touch is magical.

Touch is at best a second-class input system. It's never going to have the kind of pointing precision we got used to twenty years ago with mice, and it's never going to be comparable for text input with a dedicated keyboard. We use it on our phones because it's all we've got, and as terrible as touch screens on a device that small are (my Samsung phone has an effective touch resolution of about 4x7 fingertips, and I can't reliably interact with any elements smaller than that), the alternatives are worse.


We were discussing current touch technology, not future haptic what-ifs.

Bandwidth is information transferred over time. Whether you are discussing input (how fast you can push and gesture) or output (you can't see the screen because your hand is in the way), current-era touch is inferior to traditional input systems.
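As a rough back-of-envelope sketch of that claim (every rate here is an assumption for illustration, not a measurement): a 60 WPM typist at ~5 characters per word, treating letters as roughly uniform, versus ~2 deliberate taps per second choosing among ~30 on-screen targets:

    // All numbers are illustrative assumptions, not measurements.
    const charsPerSec = (60 * 5) / 60;                       // 60 WPM at ~5 chars/word
    const keyboardBitsPerSec = charsPerSec * Math.log2(26);  // ≈ 23.5 bits/s
    const touchBitsPerSec = 2 * Math.log2(30);               // 2 taps/s over ~30 targets ≈ 9.8 bits/s
    console.log(keyboardBitsPerSec.toFixed(1), touchBitsPerSec.toFixed(1));

Real English text carries far less entropy per character than uniform letters, but that discount applies to touch text entry just as much, so the ratio is the interesting part.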

You failed to provide a single concrete example of a creative use of touchscreens beyond a vague music reference. I've seen those apps too; they are frustrating to work with, and any serious musician would use a (musical) keyboard with real velocity support instead.

I agree UIs need to move forward, I simply disagree that current-era touch is creative or high enough bandwidth to get us anywhere useful.


>A touch screen is unavoidable for very small devices like mobile phones, but whenever any alternative is possible it is better.

I have to disagree. I think as with any engineering challenge, it's about what tradeoffs work best for your specific application.

If non-touch screens were better for the tablet application (that is, tablets in general, not any particular app), we would have stuck with physical keyboards and the BlackBerry trackball, or the classic Palm styluses, or some other new pointer device. Yet here we are.

Fruit Ninja was always possible with a mouse or trackpad, and yet it wasn't a thing until modern capacitive touch screens. Tellingly, it has only been ported to consoles where motion controls / VR are an option.

Even something like a Wacom tablet: they've existed for years as external devices with none of the "obscuring your view" problems that a touch screen has, and with a high degree of point-to-point precision. And yet as soon as the Cintiqs and other touch screen versions of their drawing tablets appeared, most artists I knew switched, or started saving to switch, right away.

And touch controls allow for the possibility of multiple simultaneous inputs, something a trackpad or stylus does not, so something like a software mixing board is clearly better experienced through a touch interface than with a mouse or trackpad.
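As a minimal sketch of that point, using the standard browser TouchEvent API: e.touches reports every finger currently on the screen, so several faders can be ridden at once, whereas a mousemove handler only ever gets one pointer. The "fader" class name and the two helpers are hypothetical stand-ins for real mixer logic:

    // Hypothetical helpers standing in for real mixer logic.
    function faderValueFromY(clientY: number): number {
      return 1 - clientY / window.innerHeight; // 0 at bottom, 1 at top
    }
    function setFaderValue(id: string, value: number): void {
      console.log(`fader ${id} -> ${value.toFixed(2)}`);
    }

    document.addEventListener('touchmove', (e) => {
      // e.touches lists every active finger, so two or three faders
      // can move simultaneously; a mouse reports only one pointer.
      for (const touch of Array.from(e.touches)) {
        const el = document.elementFromPoint(touch.clientX, touch.clientY);
        if (el instanceof HTMLElement && el.classList.contains('fader')) {
          setFaderValue(el.id, faderValueFromY(touch.clientY));
        }
      }
    });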

That's not to say you're wrong that there are applications where a touch interface is worse.

For CAD/CAM, where precise (and in many cases pixel-perfect) movement is required? Something with a higher degree of precision than your average half-inch meat stylus is going to be better.

First-person shooter games don't seem to have really made the transition to touch devices (other than with emulated joystick controls), despite the fact that tapping on your target would be faster target acquisition than using a mouse. There's just something that doesn't translate to touch.

A physical keyboard for typing input is leagues better than any touch screen keyboard I've tried.

If you're at your desk, a trackpad in front of you is worlds better than getting gorilla arm trying to do all your desktop mousing activities on a 30-inch touch screen.

But I can't agree that one can make a universal statement that alternatives are always better if they are possible.


No, touch interfaces definitely have less bandwidth than a keyboard, and you can't make it all up with clever tech. Fingers are huge, relatively speaking, and to accommodate that fact touch interfaces must get fewer bits out of the resulting input; the mathematics of signal processing require this. A stroke is much less definitive than a keystroke, or a mouse motion with a click. Moreover, you interfere with output bandwidth as well: to touch anything other than the very bottom of the screen, you must obscure the screen with your hand for seconds at a time. You get less bandwidth both out and in.
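To put rough numbers on the "fingers are huge" argument (the screen and target sizes below are assumed purely for illustration): count how many distinguishable targets fit on screen, and take log2 of that for the bits a single selection can carry.

    // Assumed sizes, for illustration only.
    function bitsPerSelection(screenW: number, screenH: number, targetPx: number): number {
      const targets = Math.floor(screenW / targetPx) * Math.floor(screenH / targetPx);
      return Math.log2(targets);
    }
    // A ~4px-precise mouse vs. a ~50px-reliable fingertip on a 1080p screen:
    console.log(bitsPerSelection(1920, 1080, 4).toFixed(1));  // ≈ 17.0 bits per click
    console.log(bitsPerSelection(1920, 1080, 50).toFixed(1)); // ≈ 9.6 bits per tap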

What obscures this fact from immediate recognition is that, unsurprisingly, touch interfaces are optimized to work within those limitations, so you don't really "feel" them, but you would if you tried to push them the way a really good content-creation application would.

For low-bandwidth applications they're fine, but they will never ever take over everywhere because they have weaknesses that make them unsuitable for high-precision, high-IO tasks.


> Turns out capacitive touchscreens were a great fit for cell phones

Yes, but to be clear, they are still an enormous compromise there. Maybe it is this generation of UX people, maybe it is fundamental to the technology, but there hasn't been much advancement in touch interface tech in years. Apple tried 3D Touch, with pressure sensitivity and haptic feedback, but then abandoned it because nobody (users or devs) wanted to deal with it. We just put up with all the downsides of touch screens because the rest of the device gives us such incredible capability, even with the (sometimes literally) painful UX.


>Now if touch screens had a dynamic 3d surface that you could navigate blindly it would be something else.

It's not about the touch or physicality of a button or knob, but about having a control that is not dependent on UI state.


What's wrong with touch? Critically, it lacks the bandwidth of other forms of input. It's for consumption, not creation. It leaves the screen dirty. It encourages wasteful use of screen real estate by UI. You imply everything else is 'broken' because children like touch; however, I believe that's just because touch mirrors the physical interaction we are all familiar with from birth, and is thus 'better for the less cognitively developed'. You can't touch this.

Ah, I was going to join in with my own thoughts but this conversation seems to have gotten a little... personal...

For the record, I find touchscreen interfaces rather underwhelming. I've tried to love my iPad for creative purposes but I just find trackpad/mouse + keyboard faster and more versatile.

Of course, both pale in comparison to physical buttons. I dream of a touchscreen with really convincing haptics. That would be the best of both worlds.


The dominant control interface for any device has to be incredibly robust. It has to be usable in almost every conceivable situation, and for almost every conceivable purpose the device is intended for. The edge cases it doesn't cover have to be covered by an even more robust, if perhaps less sophisticated, alternative.

Keyboard and mouse allow very rapid and precise text input, rich option and function choice, and very precise selection and movement control. Touchpads do a very good job of replacing the mouse for mobile devices like laptops.

For a long time touch wasn't up to scratch. Pens allowed more precise selection and motion, but were always a kludge because the pens themselves were too easy to misplace or drop while on the go. Once touch's early imprecision was overcome, it took over because you always have your fingers with you. Note that there is one case where touch isn't enough: controlling volume settings for your phone while it's in your pocket. In this edge case, physical buttons take up the slack.

My point is not that touch has limitations (it does), but that you need to take a very long, hard look at any technology intending to replace it, to be sure it is even more robust, even more convenient and precise, and has even fewer limitations across a huge range of situations.

Motion controllers like TouchMotion are extremely limited compared to touch. You can't use it in a relaxed posture, you have to have your hands raised and posed in the space that will accept gesture input. For precise selection, you need to have a cursor on screen like a mouse, because you don't have the directness of touch. Also while it's not ideal to use touch on a train or bus that's moving, trying to use something like TouchMotion would be a joke.

Voice control is highly problematic too. People actually find it extremely hard to be verbally precise to the level that many interactions with computers require. That goes double for describing visual or spatial information verbally. Anyone who's ever worked phone tech support for computer users knows what I'm talking about.

Eye tracking has possibilities, but our eyes wander around and shift focus point all the time. Sometimes we want to look at something other than the thing we're controlling. Also I suspect that maintaining the disciplined and precise eye movements you'd need to replace touch or mouse/trackpad would be pretty onerous.

So I don't see touch going away for a very long time, if ever. I remember in the 90s pundits predicting that keyboards and mice were just placeholders and they'd be gone within a few years. The truth is you'd better get used to them because they're here to stay, and so is touch.


It's sad that we can only think of touch screen interfaces as "dumbed down". The affordances of a touch-centric interface have yet to be fully explored; at least in principle, there's no reason why they could not be made just as capable as a keyboard+mouse.

> anything with an app is about 15,000 times more useable than the touch surfaces for most modern appliances.

As a person with pretty good sight (140% with glasses), I have to say that these sorts of devices tend to be difficult to use even for me. For example, my father has an induction stove with a touch interface, and I commonly fail to use it properly on the first try, time after time.

When I see a touch screen interface that is completely static (the same "buttons" at the same positions), I often wonder what led to the decision to use it instead of plain hardware buttons/knobs. Is it a cost-saving measure, or just some managerial person who decided they needed something more modern?


> Touchscreens are pretty natural and intuitive

Try watching someone in their 60s operate them and you will learn that they are anything but.

My dad finds them endlessly frustrating, as his first instinct when talking about something in an image is to put his finger on it. But that invariably triggers something on such a screen.

He also struggles with things like Google Maps on a PC with a scroll mouse, as to him the natural action is to "pull" the map closer rather than "zoom in". The end result is that he often finds himself scrolling the wrong way...


Touch screens are made for consumption; rich interfaces (with tactile feedback, like keyboards, mice, etc.) are made for creation. Most people just consume on their digital devices, so touch screens are enough for them. It's as simple as that. People who want rich interfaces are in the minority, but I don't think rich interfaces will ever go away, because someone has to create all the stuff for those digital devices.

Touch input is perfect for consuming, horrible for creating.

You miss the point entirely: he acknowledges that touchscreens work. But he points out that they work by visual feedback only, and that this denies an enormous part of our tactile senses. In an ideal world, we would not have to choose between tactile and visual.

> The problem that touch interfaces solve, ever since the advent of the first smart phone, is that the interface is now dynamic. You can change it without having to replace the hardware.

I mean, the other point of a dynamic interface is that you can now have more controls than would fit on a static interface. Touchscreen fit-to-purpose controls might suck more than hardware fit-to-purpose controls, but either option is better than a single set of generic controls that control multiple systems that "should" have different control paradigms, translating to the generic controls being a compromised bad fit for any use-case.

E.g. a hardware English-language keyboard is probably better than a touchscreen English-language keyboard (though people with modern Blackberries might dispute this); but both are better than entering English text through T9 on a dial pad. And the touchscreen has the benefit of allowing you to have more keyboards (for e.g. the multiple native languages you type that use different alphabets), which wouldn't even fit on the phone as hardware keyboards.
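As a rough illustration of why T9 entry is such a bottleneck (the digit mapping below is the standard phone keypad layout; non-letters are omitted for brevity): every word collapses to one digit sequence, so distinct words collide and each keypress carries less information than a dedicated key would.

    const T9: Record<string, string> = {
      a: '2', b: '2', c: '2', d: '3', e: '3', f: '3',
      g: '4', h: '4', i: '4', j: '5', k: '5', l: '5',
      m: '6', n: '6', o: '6', p: '7', q: '7', r: '7', s: '7',
      t: '8', u: '8', v: '8', w: '9', x: '9', y: '9', z: '9',
    };
    const toDigits = (word: string) =>
      [...word.toLowerCase()].map((c) => T9[c]).join('');

    // "good", "home", "gone" and "hood" all collapse to the same keys:
    console.log(toDigits('good'), toDigits('home'), toDigits('gone'), toDigits('hood')); // 4663, four times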

I bring this up, because eventually you run out of space to stuff additional controls. As airplanes become ever-more advanced, their cockpits will approach that point. At that point, dynamic affordances may be necessary, just so you can have some kind of "pagination" allowing you to squeeze more controls in. (Hopefully it'd just be for the non-time-critical switches to flip.)


This was similar to my own reaction [0], that these concept videos don't look far enough forward.

And, maybe because I'm just a born contrarian, as the world moves toward touch-based direct-manipulation paradigms, I've personally been moving toward a more tactile, indirect paradigm. I recently bought a mechanical-switch keyboard, for example, that I'm growing more and more fond of every day. I've also started looking for a mouse that feels better in the hand, with a better weight, and better tactility to the button clicks.

The lack of tactility in touch screen keyboards has always been especially annoying to me. With physical keys, there's so much information passing between my fingers and the keys. I mean, there's an entire state, the reassuring feeling of fingers resting on keys, that's completely missing.

I accept the compromise in a phone, something that needs to fit in my pocket so I can carry it around all the time. But this makes me lament the rise of tablet computing. This is the sort of place that I refer to when I talk about tablets privileging consumption over production.

I don't think the problem is confined to UI hardware, though. I think part of what's holding back richer and more meaningful social interaction online is the fact that current social networking paradigms map better to data than to human psychology. It's the parallel problem of fitting the tool to the problem, but not to the user.[1]

I'm not sure I agree with the direction he points to (if I understand him correctly). Making our digital tools act and feel more like real, physical objects is akin to 3D skeuomorphism. It's like making a device to drive nails that looks like a human fist, but bigger and harder. Better, I think, to figure out new ways to take advantage of the full potential of our senses and bodies to manipulate digital objects in ways that aren't possible with physical objects. And, please, Minority Report is not it.

[0]: http://news.ycombinator.com/item?id=3184216

[1]: More here: http://blog.byjoemoon.com/post/11670022371/intimacy-is-perfo... and here: http://blog.byjoemoon.com/post/12261287667/in-defense-of-the...


IMO, the idea that it would be great to type on a glass touch screen is Steve Jobs's biggest and most long-lasting misstep.

Overall I think it's interesting and bizarre that both modern technology and visions of the future have totally sacrificed tactility. It seemed to be all about removing the real world: tactile interfaces are old-fashioned; in the future everything works in a way that has minimal connection to reality (e.g. a Minority Report style UI) and so is obviously ethereal and cannot be touched. It makes me wonder why we had that ideal in the first place, and whether that ideal shaped technology or vice versa. Why did we fantasize about losing tactility?

Something I've also noticed is that we almost seem to be unable to imagine a programmable tactile interface, even in science fiction. I guess humans wanted "something extra/futuristic/other-worldly", and that meant having things be unlike anything else in the world, which as the author points out, means something without tactility.


> "How do we address this issue?"

The way people have already done so in touch software to date?

You program 'un-pinch to zoom' so that zooming the desired elements allows increasing levels of accuracy as needed. And in the cases where you need 'pixel perfect' accuracy [1], you simply include "bump" UI controls, or expose explicit pixel coordinates that can themselves be edited to effect the desired movement of the layer or selection or what-have-you (something even keyboard/mouse UIs usually offer).
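A minimal sketch of both techniques, using the standard browser TouchEvent API; the selection object and the wiring of the nudge buttons are hypothetical:

    let startDist = 0;
    let scale = 1;

    function dist(t: TouchList): number {
      return Math.hypot(t[0].clientX - t[1].clientX, t[0].clientY - t[1].clientY);
    }

    document.addEventListener('touchstart', (e) => {
      if (e.touches.length === 2) startDist = dist(e.touches);
    });
    document.addEventListener('touchmove', (e) => {
      if (e.touches.length === 2) {
        // Un-pinching magnifies the layer, so the same fingertip spans
        // fewer document pixels and precision rises as needed.
        // (Applying `scale` to the zoomed layer is omitted from this sketch.)
        scale = dist(e.touches) / startDist;
      }
    });

    // "Bump" controls for the pixel-perfect cases: explicit one-pixel
    // nudges instead of precise pointing. Arrow buttons in the UI would
    // call nudge(1, 0), nudge(-1, 0), and so on.
    const selection = { x: 100, y: 100 };
    function nudge(dx: number, dy: number): void {
      selection.x += dx;
      selection.y += dy;
    }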

Precision is a largely solved issue in touch software. The real problem that will keep mice around in a largely touch-driven world is the simple ergonomics of spending eight hours at a desk (i.e. gorilla arm). [2]

[1] 'Pixel perfect' is a concept that makes less and less sense as displays reach and exceed 300dpi. Pretty soon we'll all be dealing with vectors, and things will be better for it. 'Pixel perfect' accuracy is of merely transitory usefulness until then.

[2] Barring the development of a drafting-table-style variant of the original surface and either some sort of flawless arm/palm/accidental-touch rejection or a switch from 'any' touch to 'explicit-object' touch.

e.g. the desk ignores all contacts except from a pre-ordained 'pen', 'thimble' or 'glove'.

