How is it lately? I have the original one from the Kickstarter. I played with it a bit because I had some crazy ideas about recognizing ASL, but I found that, while it was super great at instantly recognizing orientation and open vs closed hands, it utterly failed to capture complex handshapes ("this hand is making a fist except the thumb is poking out a bit between the middle and ring finger").
Sure! With V2 we saw a more robust and granular hand model, with every finger and joint in the hand being identified. Significantly better performance against ambient light. And huge improvements to how the software handled finger occlusion.
A lot of these improvements were diminished when we went to VR -- new angles, complex backgrounds, different ambient lighting conditions. So we made the Orion software, which was an even bigger step up:
- lower latency
- longer range
- better and faster hand recognition
- vastly improved robustness to cluttered backgrounds and ambient light
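To make the V2 hand model above concrete, here's a rough sketch of reading per-finger bone positions with the V2-era Python SDK. This is from memory rather than official sample code, so names may differ slightly between SDK versions:

```python
# Rough sketch against the Leap Motion V2 Python SDK -- names are from memory
# and may differ slightly between SDK versions.
import Leap

BONES = [Leap.Bone.TYPE_METACARPAL, Leap.Bone.TYPE_PROXIMAL,
         Leap.Bone.TYPE_INTERMEDIATE, Leap.Bone.TYPE_DISTAL]

controller = Leap.Controller()
# A real app would register a Listener and handle frames as they arrive;
# a one-shot frame() right after construction can come back empty.
frame = controller.frame()

for hand in frame.hands:
    # grab_strength runs from 0.0 (open hand) to 1.0 (fist) -- the easy,
    # reliable signal; the hard part is the exact pose of every joint.
    print("%s hand, grab strength %.2f" %
          ("left" if hand.is_left else "right", hand.grab_strength))
    for finger in hand.fingers:
        for bone_type in BONES:
            bone = finger.bone(bone_type)
            # prev_joint / next_joint are 3D positions in mm, device-relative.
            print("  bone %d: %s -> %s" %
                  (bone_type, bone.prev_joint, bone.next_joint))
```

Per-bone joint positions like these are what you'd need for the "fist with the thumb poking out between two fingers" case, and they're exactly where occlusion makes the data noisy.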
After playing with it for a bit, the two-finger swipe and click feel pretty intuitive.
But the 'rock on' thumb swipe still feels a little uncomfortable, especially once I've gotten used to gesturing with my right hand and then have to carry my arm across my body to reach Mute/Unmute.
Have you considered just pulling in some American Sign Language? The controls (mute, tile, etc.) could even show a depiction of the hand symbol.
The hand tracking is getting pretty good if you look at a HoloLens or some of the recent Quest 2 work. Trying to type with it is trash, but for manipulating something in 3D space it gives you much more fidelity than any other input system.
The UX primitives are still getting worked out. Questions like "What is it like to scroll in 3D AR?", "What's the best way to input text?", and "What gestures can be reliably tracked from a headset?" are still open.
It will be interesting to see what shakes out. Maybe we'll end up with a lot of interconnectivity with phones and PCs for better inputs.
Yep, it's totally a surface electromyography device that feeds into a neural network to recognize the gesture you're making from your wrist. There are alternatives of course, such as gesture recognition from a camera. This is interesting regardless, but I wonder how useful it is. Gesture recognition is not new; it just hasn't surpassed other input devices yet.
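For intuition, a toy version of that kind of pipeline: window the wrist EMG channels, extract simple features, and classify each window. The real device reportedly uses a neural network; a random forest on synthetic data keeps this sketch short, and every number here (sample rate, channel count, gesture labels) is a made-up placeholder:

```python
# Toy sketch of sEMG-based gesture classification -- purely illustrative,
# not how any shipping wristband actually implements it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 200          # assumed sample rate (Hz)
WINDOW = 40       # 200 ms windows at 200 Hz
CHANNELS = 8      # assumed number of EMG electrodes around the wrist

def features(window):
    """Classic sEMG features per channel: mean absolute value and RMS."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    return np.concatenate([mav, rms])

# Fake training data: (n_windows, WINDOW, CHANNELS) raw EMG plus labels
# like 0 = rest, 1 = pinch, 2 = fist.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(300, WINDOW, CHANNELS))
y = rng.integers(0, 3, size=300)

X = np.array([features(w) for w in X_raw])
clf = RandomForestClassifier(n_estimators=100).fit(X, y)

# At runtime: slide a window over the live signal and classify each one.
live_window = rng.normal(size=(WINDOW, CHANNELS))
print("predicted gesture:", clf.predict([features(live_window)])[0])
```

The interesting question is the same one as with cameras: coarse gestures (fist, pinch) classify easily, while fine per-finger pose from the wrist is much harder.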
Thanks for the heads up on Orion. I bought a Leap Motion at its initial release and, like some others here, was disappointed in its ability to accurately track my fingers/hands. I just might pull it off the shelf and give it another shot now.
What can one say? I also tried it (I've always been a fan of new interaction methods) and was severely disappointed. Even the "put reflective stickers on your fingers and use an off-the-shelf Wiimote" approach had better tracking, though it's obviously less convenient.
That said, I am waiting for v2 from these guys and/or anyone else.
The tech is amazing; I can't believe how fast it's progressing. But as someone who is missing a hand, I don't think I'd ever use one. I can see how they're incredibly useful for people who have more difficulties.
Anyone know how the finger gesture control works? I wonder how accurate it would be. That UI seems to be a big advancement compared to other VR devices.
This makes me realize that the gestures it's being demonstrated with are going to be hilarious if they're misidentified. I'm /assuming/ the thing won't pick up gestures aimed at someone else -- so that if I pick up a piece of paper with a hand shape similar to their "pinch and move things" gesture, it will realize it isn't the same.
Or will this be akin to how Siri does a shit job understanding any speech that isn't mechanical? It will be absolutely hilarious if it has a hard time recognizing non-light-skinned hands for the gestures. Really hope they don't make that mistake.
I wonder if this could be used to allow ASL input to a computer -- automatic transcription of conversation and such? I think it really depends on how accurately it can sense actual finger position rather than just motion in the muscle.
How does this compare with what Google[x] has been developing in RF gesture recognition? In the videos they are using a Leap Motion, while Google is "suggesting" using your fingers as support.
The Leap Motion still does hand tracking better than MediaPipe, and it's still the best hand tracker I know of (besides larger devices like the Oculus Quest).
We've got an open source library for mobile hand-object input [https://portalble.cs.brown.edu/], and the version with Leap Motion is really nice, but doesn't directly work with a phone (we had to pipe data through a compute stick to make it work).
I'd love to see MediaPipe Hands match LeapMotion precision some day, but I'm not even sure if it's possible. A real depth sensor goes a long way.
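For anyone who wants to compare the two themselves, a minimal MediaPipe Hands sketch in Python (single-image mode; "hand.jpg" is just an assumed path to any photo of a hand):

```python
# Minimal MediaPipe Hands sketch -- single-image mode; a real comparison
# against Leap Motion would need live video and a depth baseline.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

image = cv2.imread("hand.jpg")  # assumed input photo
with mp_hands.Hands(static_image_mode=True,
                    max_num_hands=2,
                    min_detection_confidence=0.5) as hands:
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_hand_landmarks:
    for hand_landmarks in results.multi_hand_landmarks:
        tip = hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP]
        # x, y are normalized image coordinates; z is only relative depth,
        # not metric -- one reason a real depth sensor still wins on precision.
        print(tip.x, tip.y, tip.z)
else:
    print("no hands detected")
```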