
> "How do we address this issue?"

The way people have already done so in touch software to date?

You program 'un-pinch to zoom' to magnify the desired elements, allowing increasing levels of accuracy as needed. And in the cases where you need 'pixel perfect' accuracy [1], you simply include "bump" UI controls, or expose explicit pixel coordinates that can themselves be edited to effect the desired movement of the layer or selection or what-have-you (something even keyboard/mouse UI usually offers).
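
To make the 'bump' idea concrete, here's a minimal sketch (the Layer type and function names are illustrative, not any particular framework's API): each tap on an on-screen arrow moves the selection by exactly one unit, however coarse the touch gesture itself is.

    // Hypothetical 'bump' controls for pixel-precise nudging.
    interface Layer { x: number; y: number; }

    function bump(layer: Layer, dx: number, dy: number): void {
      // One tap = one unit of movement, independent of finger size.
      layer.x += dx;
      layer.y += dy;
    }

    const selected: Layer = { x: 120, y: 80 };
    bump(selected, 0, -1); // 'up' arrow: move the selection one pixel up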

Precision is a largely solved issue in touch software. The real problem that will keep mice around in a largely-touch-driven world is the simple ergonomics of spending eight hours at a desk. (i.e. Gorilla arm.) [2]

[1] 'Pixel perfect' is a concept that makes less and less sense as displays reach and exceed 300 dpi. Pretty soon we'll all be dealing with vectors, and things will be better for it. 'Pixel perfect' accuracy is of merely transitory usefulness until then.

[2] Barring the development of a drafting-table-style variant of the original Surface and either some sort of flawless arm/palm/accidental-touch rejection or a switch from 'any' touch to 'explicit-object' touch.

e.g. the desk ignores all contacts except from a pre-ordained 'pen', 'thimble' or 'glove'.
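
In a browser context, an 'explicit-object' filter of that sort already more or less exists: the standard Pointer Events API reports whether a contact came from a finger, a mouse, or a pen, so a pen-only surface is a few lines (startStroke here is a hypothetical drawing entry point, not a real API):

    const canvas = document.querySelector('canvas')!;

    function startStroke(x: number, y: number): void {
      console.log('stroke at ' + x + ', ' + y); // stand-in handler
    }

    canvas.addEventListener('pointerdown', (e: PointerEvent) => {
      // pointerType is 'mouse', 'pen' or 'touch' per the spec;
      // rejecting everything but 'pen' ignores palms and fingers.
      if (e.pointerType !== 'pen') return;
      startStroke(e.clientX, e.clientY);
    });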




I would love a drafting-table-size touch interface. I think there's still a lot of space left to explore to find a way to reject "resting" touches. Beyond sensor- or algorithm-based approaches, which might look at surface area or pressure, one could also go with some pretty simplistic solutions. Something as dumb as a foot bar connected to a switch, letting active touches through, might be fine for at-desk operation.
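
A toy sketch of that foot-bar gate, assuming footSwitchDown is fed by whatever hardware reads the bar and dispatchTouch stands in for the app's normal touch path:

    // Touches only count as intentional while the foot switch is held.
    let footSwitchDown = false;

    function dispatchTouch(x: number, y: number): void {
      console.log('intentional touch at ' + x + ', ' + y);
    }

    function onRawTouch(x: number, y: number): void {
      if (!footSwitchDown) return; // resting hands are simply dropped
      dispatchTouch(x, y);
    }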

Thinking on this more, Kinect-style cameras looking down the plane of the desk could probably do 'posture' analysis to sort out (un)intentional touches pretty easily.

That might be easier than even trying to develop pressure sensors and heuristics.


I'm afraid this doesn't work in all cases. When working in a reduced physical area, irrespective of pixel count, zooming in and snapping to boundaries is counterproductive. Audio wave editing, for example, is an operation on cyclic (obviously) information, and when zooming in as a means of establishing location, important context is lost.

Imagine a timeline with a periodic wave, interrupted only by a one- or two-cycle click. Zooming in to normalise the ratio of object to finger makes it very easy to lose context. That is, relative positioning left or right is lost, so it becomes frustrating to zoom in and out in order to get your bearings again. Even attempting this on a trackpad is quite difficult compared to a high-resolution mouse.

There are many cases where it's much better to have a large display area combined with a high-resolution mapping to that area. I could edit waves on a postage-stamp-sized display with my finger if I put my mind to it; I don't think I would be as productive as on a tablet-sized display, though. In other cases I need an even higher ratio than that. I'm afraid stubby fingers on objects scaled up to compensate are not always adequate.


It sounds to me like you're conflating "the trouble with touch" with "the trouble with too-small-screens" and deciding the problem is touch.

But I'm guessing you don't edit waves with a keyboard/mouse on a 3, 4 or 9.7" screen either. So maybe "touch" isn't the obstacle you're really battling in the situation described.

Also, haven't people long had solutions where a 'work area' is zoomed for precision selection/editing while one or more 'larger context' views are maintained (each operating at its own zoom level) in another chunk of the screen?

Do wave-editing tools not behave like that?
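
For what it's worth, the 'work area plus context view' pattern is simple enough to sketch: the overview strip just highlights where the zoomed detail window sits, so relative position is never lost (the names and numbers below are illustrative):

    interface View { startSec: number; endSec: number; }

    // Map the zoomed detail window into overview pixel coordinates.
    function overviewHighlight(detail: View, totalSec: number,
                               overviewWidthPx: number): [number, number] {
      const left  = (detail.startSec / totalSec) * overviewWidthPx;
      const right = (detail.endSec   / totalSec) * overviewWidthPx;
      return [left, right];
    }

    // A 0.5 s click zoomed within a 60 s file, 600 px overview strip:
    // overviewHighlight({ startSec: 30.0, endSec: 30.5 }, 60, 600) -> [300, 305]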


Well, keeping to this example, there are sometimes conflicting requirements. A transient with a long decay train, such as a crash cymbal or knock, explosion or gunfire, requires a wide view to properly observe the full effect. The trailing decay can last for quite a while in this scenario. The optimum here is to include a segment before and after, or perhaps even more than that, depending on the material, as some modulation becomes clearer the more you zoom out, not in.

At the same time, operating on selected segments is more efficiently done with finely controlled hairline cursors, where an obscuring object like a finger doesn't get in the way. After this, of course, zooming and other means of fine control and selection come into play.

In scenarios like this it is very much a case of not being able to see the forest for the trees if the representation is made too large.

There are solutions to the problem of precise location, which I think include touch and gestures, though not necessarily solely through touch. In practice I use the right hand for precise hairline location and the left to zoom in with gestures, zoom out again for context and then iterate.

I'm not arguing for mice over touch; I'm looking at precision. I always find it quicker to type on my Bluetooth keyboard than on my iPhone's on-screen keyboard. For me, the reason comes down to the ratio between finger and active element: physical keyboard keys are larger than my fingertips, while on-screen keys are smaller.

I actually think that in some cases gestures along the z axis, as well as x and y, would be a way of adding capability.

These opinions are based on having to give up the USB mouse in the field, using JTAGs and external drives on a two-port-only MBP. Using the trackpad leads to much longer work times, simply because it's a less precise device.


Let me start off by saying I was originally taking issue with the idea that touch precision is a problem: that it can't work in certain cases and that we'll always need mice. And all that in a complaint that demonstrated a pretty narrow understanding of what has already been done with touch interfaces.

It was never my intention to argue that touch is always the preferable interface for all workloads (something I tried to convey by pointing out how mice will remain relevant for quite some time, due entirely to day-long workloads).

As applies to your concerns, I was just trying to suggest that workable solutions exist, even if they'll always be less-than-ideal for larger quantities of work.

As to your specific concern, I still think a workable solution may be out there, even if it remains undoubtedly less efficient than a mouse and a larger screen.

e.g. Wouldn't the sorts of drag and off-axis drag controls that are used for seeking in many podcast/audio-player apps [1] address precision selection in cases where too much zoom presents problems, and also obviate the concern about fingers obscuring the wave itself?

[1] Touch to 'grab' the selection marker/nubby on the wave/timeline, drag across the x axis to seek, and then down the y axis to control the speed of the seek, typically doing more and more fine-grained seeking for a given x-axis drag length as the finger gets further from the wave/timeline.
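
A rough sketch of that off-axis seek (the constants are made-up tuning values, not taken from any particular app): the x drag moves the cursor, and vertical distance from the timeline scales the sensitivity down.

    // Finer seeking the further the finger drags below the timeline.
    function seekDelta(dxPx: number, dyPx: number,
                       secPerPx: number): number {
      // Halve the sensitivity for every 80 px of vertical distance.
      const fineness = Math.pow(0.5, Math.abs(dyPx) / 80);
      return dxPx * secPerPx * fineness;
    }

    // At the timeline (dyPx = 0) a 100 px drag seeks 100 * secPerPx seconds;
    // 160 px below it, the same drag seeks only a quarter of that.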


I came close to the drafting-table experience with a pen-driven old-style A4 tablet (not a screen) that I had mapped to my screen coordinates. Hovering the pen over the tablet moved the mouse, and tapping clicked. At the time I felt it was a far superior way of working than a mouse. (I had to stop using it when I moved to the Mac and couldn't find a driver.) But even on that you would get gorilla arm from constantly moving your arm across the tablet, despite being able to lean on an elbow. I suspect a drafting-table UI would suffer from the same fatigue issues as a touchscreen interface.
