Gaze redirection looks bad partly because head orientation ends up decoupling from eye direction.
With a depth camera you can move the camera's entire vantage point as if it were behind the right area on screen. That's probably a better approach, along with AI filling in the disocclusions (more cameras can also do this).
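To make that a bit more concrete, here's a minimal sketch of the reprojection step, assuming a registered RGB-D frame and shared pinhole intrinsics (function and parameter names are mine, not from any particular SDK). It forward-warps pixels into a virtual camera placed where you want the viewpoint to be, and the black holes it leaves behind are exactly the disocclusions you'd hand to an inpainting model or cover with a second camera:

```python
import numpy as np

def reproject(rgb, depth, K, T_new):
    """Warp an RGB-D frame to a new virtual camera pose.

    rgb:   (H, W, 3) color image
    depth: (H, W) depth in meters (0 = invalid)
    K:     (3, 3) pinhole intrinsics shared by both views (an assumption)
    T_new: (4, 4) pose of the virtual camera relative to the real one
    Returns the warped image; unfilled pixels (disocclusions) stay black.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    valid = z > 0

    # Back-project each pixel to a 3D point in the real camera's frame.
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (u.ravel() - cx) / fx * z
    y = (v.ravel() - cy) / fy * z
    pts = np.stack([x, y, z, np.ones_like(z)], axis=0)

    # Transform into the virtual camera's frame and project back to pixels.
    pts_new = np.linalg.inv(T_new) @ pts
    zn = pts_new[2]
    un = (fx * pts_new[0] / zn + cx).astype(int)
    vn = (fy * pts_new[1] / zn + cy).astype(int)

    # Simple point splat with no z-test; fine for a sketch, not production.
    out = np.zeros_like(rgb)
    keep = valid & (zn > 0) & (un >= 0) & (un < W) & (vn >= 0) & (vn < H)
    out[vn[keep], un[keep]] = rgb.reshape(-1, 3)[keep]
    return out
```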
You can deliver a rendered avatar based on facial tracking to remove the "camera in the wrong place" eye contact problems, even with normal screens and cameras.
VR makes things immersive, but it is neither necessary nor sufficient to solve the eye contact problem. It's orthogonal.
Gaze-tracking is also going to change depth-of-field and HDR effects. Right now both of them generally suck, because the computer doesn't really know where you're glancing on the screen.
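For the depth-of-field half, here's a rough sketch of what gaze-driven focus could look like, assuming the engine can tell you the scene depth under the gaze point and accept a focal distance (both hooks here are hypothetical). The smoothing matters, otherwise every little saccade yanks the focus around:

```python
import math

# Hypothetical hooks: read_depth_at(gaze_xy) samples scene depth under the
# gaze point, set_dof_focal_distance(d) feeds the post-process DoF effect.
def update_focus(gaze_xy, current_focus, dt,
                 read_depth_at, set_dof_focal_distance, ease_per_second=8.0):
    target = read_depth_at(gaze_xy)   # None if gaze is off-screen or on the sky
    if target is None:
        return current_focus
    # Exponentially ease toward the gazed depth so focus doesn't snap
    # with every tiny saccade.
    alpha = 1.0 - math.exp(-ease_per_second * dt)
    new_focus = current_focus + (target - current_focus) * alpha
    set_dof_focal_distance(new_focus)
    return new_focus
```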
Without even considering positional head tracking, just the act of rotating your head moves the location of your eyeballs in space (our eyes are not the center of rotation of our head, our neck is).
So I'm also doubtful that this will end up looking any better than a simple 3d QuickTime VR sphere.
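To put a number on how much the eyes translate when you only rotate your head (the offset from the neck pivot below is a rough guess, not anatomy data):

```python
import numpy as np

# Assumed numbers: eyes sit ~10 cm above and ~8 cm in front of the neck pivot,
# just to show the magnitude of the effect.
EYE_OFFSET = np.array([0.0, 0.10, 0.08])  # meters, in the head's own frame

def eye_position(neck_pos, yaw_rad):
    """Where the eyes end up after rotating the head about the neck (yaw only)."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rot_y = np.array([[  c, 0.0,   s],
                      [0.0, 1.0, 0.0],
                      [ -s, 0.0,   c]])
    return neck_pos + rot_y @ EYE_OFFSET

# Turning the head 45 degrees moves the eyes by several centimeters:
delta = eye_position(np.zeros(3), np.radians(45)) - eye_position(np.zeros(3), 0.0)
print(delta)  # roughly [0.057, 0, -0.023] m -- pure rotation still translates the eyes
```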
Interesting that gaze-to-focus is getting panned here, and not because it doesn't track well. I've worked in VR games for a while; gaze tracking excites designers, but I find it a very annoying input mechanism and we usually don't ship it. Maybe it's still the right call for the AVP, but being able to interact with things you aren't looking at is important.
I wouldn't call it a step back. That sounds rather pessimistic.
It looks like an experiment, like many great things that come out of Microsoft's research.
The Computer Vision field has certainly gotten far with head-tracking and object recognition.
With a bunch of relatively simple code, you can have a piece of software that tracks your jaw and blinking gestures with a cheap webcam and runs whatever macros you choose.
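For the blink half of that, the usual trick is an eye-aspect-ratio test on facial landmarks. A rough sketch, assuming OpenCV plus dlib and its standard 68-point landmark file (downloaded separately); the macro hook is whatever you want it to be:

```python
import math
import cv2
import dlib

# Assumed local file: dlib's standard 68-point landmark model.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

RIGHT_EYE = range(36, 42)  # landmark indices in the 68-point scheme
LEFT_EYE = range(42, 48)

def eye_aspect_ratio(p):
    # Vertical eye opening over eye width; the ratio drops sharply during a blink.
    return (math.dist(p[1], p[5]) + math.dist(p[2], p[4])) / (2.0 * math.dist(p[0], p[3]))

def run_macro():
    print("blink detected -- fire whatever macro you like here")

cap = cv2.VideoCapture(0)
closed_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        ear = (eye_aspect_ratio([pts[i] for i in LEFT_EYE]) +
               eye_aspect_ratio([pts[i] for i in RIGHT_EYE])) / 2.0
        closed_frames = closed_frames + 1 if ear < 0.21 else 0
        if closed_frames == 3:  # ~3 consecutive closed frames = deliberate blink
            run_macro()
    if cv2.waitKey(1) == 27:    # Esc quits
        break
cap.release()
```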
I'm not sure about eyeball following; it depends on the camera's resolution and how much you train the program.
I have myself experimented with head-tracking to replace mouse-look in FPS games. It is very fun to try, save for an eventual headache in Quake 3's Q3DM17 :)
Now, these kinds of experiments are not for the average consumer, but they have been simplified so much over the last few years that the number of excuses for not trying them out is shrinking.
So they talk about 4 ways to use eye tracking:
1) Use gaze to control the camera pivot -- That sounds incredibly annoying, like I wouldn't be able to just look around the screen without it constantly moving. Though if a system is built with this use case in mind it could be cool. Imagine having a whole 360° view on screen but with it kind of squished around the edges; when you look out to an area on the edge, it could immediately foveate the display to that area.
2) Eye contact - Could be really cool. Monsters that can only move when you look away (or blink), or that disappear if you look directly at them (there's a sketch of this after the list). The NPCs stop talking or get upset if they think that you're not paying attention.
3) Natural targeting - I'd like to try this but I'm skeptical. What if I want to look where I'm running for a moment? Then my gun careens across the screen.
4) Immersive Graphics and Sound - This is the most interesting one, and it has the worst name. This application dynamically focuses and blurs the depth of field of an image as you look around. One of the big problems with 3D immersive tech right now (Real3D, Oculus) is that you can't capture this element of natural vision; they usually default to rendering everything as extremely sharp.
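The only-moves-while-unobserved monster from point 2 is simple enough to sketch. This assumes the eye tracker already hands you a gaze ray in world space (origin plus unit direction) and that monsters have bounding spheres; all the names and numbers here are made up for illustration:

```python
import math

def is_gazed_at(gaze_origin, gaze_dir, center, radius, slack_deg=5.0):
    """True if the gaze ray (gaze_dir assumed unit length) passes near the monster."""
    to_center = [c - o for c, o in zip(center, gaze_origin)]
    dist = math.sqrt(sum(v * v for v in to_center))
    if dist < radius:                      # gaze origin is inside the bounds
        return True
    cos_angle = sum(d * v for d, v in zip(gaze_dir, to_center)) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    apparent_radius = math.degrees(math.atan2(radius, dist))
    return angle <= apparent_radius + slack_deg

def update_monster(monster, player_pos, gaze_origin, gaze_dir, dt, speed=1.5):
    # Weeping-Angel rule: it only closes distance while nobody is looking at it.
    if is_gazed_at(gaze_origin, gaze_dir, monster["pos"], monster["radius"]):
        return
    to_player = [p - m for p, m in zip(player_pos, monster["pos"])]
    dist = math.sqrt(sum(v * v for v in to_player)) or 1.0
    monster["pos"] = [m + v / dist * speed * dt
                      for m, v in zip(monster["pos"], to_player)]
```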
I'm surprised they didn't talk about typing with your eyes, since it's something that eye tracking is currently used for.
> The chief problem is that without eye tracking, it's incredibly UNFUN to use your head for movements that your eyes could otherwise have done for you. Mind you when the HMDs were much heavier back then it sucked a lot more, but it's still pretty shitty not being able to glance aside.
Isn't that more of a problem of field of view? With a wide enough field of view, you could just look at whatever you wanted to. Basically make it like real life, where what you look at is the combination of where your head is pointed, which the computer needs to use to update the screen, and where your eyes are looking, which the computer doesn't need to care about. Or are you thinking of using eye tracking for something else, like determining what you're pointing at, the way a mouse does? Yet another option you have in VR is a separate targeting input, like how Dactyl Nightmare maps the gun you're holding directly onto the virtual world's gun. Using the head position to AIM rather than just LOOK seems like a bad way of doing things.
I find eye-tracking interfaces to be very clunky and constraining. You have to restrict where you point your eyes, or you'll select things by accident, and then you'll miss what's going on in the world. I think any prolonged session will give people eye strain; 3D goggles already do, and with eye tracking doubly so.
So, this article. I read it a few days ago, and while I do like some of the concepts (like the monsters-that-move-when-you-look-away idea mentioned below), a lot of it just seems like it'd be more annoying than anything else.
Camera controlled by where you look? Oh, that's good, except quite a lot of the time in games you don't want the camera to follow your gaze around the screen. For example, in a lot of platformers you want to be able to see as much of the area as possible in order to judge where you're jumping to. If the camera tracks your eye movements... well, someone's probably falling into the abyss pretty soon.
I don't think anyone would want to be trying Grandmaster Galaxy or Champion Road with this sort of tech any time soon.
There's also the way a lot of games have 'lock on' mechanics for targeting enemies. Your attacks go one way (towards the enemy), but the camera focuses somewhere else to give you a better view of your surroundings. I can see that falling apart pretty quickly if eye tracking got brought into it.
But even assuming it'll mostly be used for games with a first person viewpoint (which in itself might be a limiting factor for things like virtual reality, since it's a smaller percentage of games than people like to think), it just seems like it'd be a bit too... precise for a lot of titles. People playing games look all over the screen for things that might be important in the future, and they probably don't want the camera bouncing around like a pinball as a result.
The focus and depth of field aspects seem the only plausible ones for most games.
When some of those 45 perspectives don't go near any watching eyes, do they still need to be rendered? If not, one might use head/eye tracking to save compute.
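A back-of-the-envelope version of that culling, assuming the 45 perspectives are fixed horizontal view directions and head tracking reports each eye's angle relative to the display (all names and numbers below are invented):

```python
def views_to_render(view_dirs_deg, eye_angles_deg, tolerance_deg=4.0):
    keep = []
    for i, view in enumerate(view_dirs_deg):
        # Render only views that land near some tracked eye; the rest can be
        # skipped (or reused from the previous frame) to save compute.
        if any(abs(view - eye) <= tolerance_deg for eye in eye_angles_deg):
            keep.append(i)
    return keep

# e.g. two viewers (four tracked eyes) in front of the display:
views = [i * 2.0 - 44.0 for i in range(45)]   # -44 deg .. +44 deg, 2 deg apart
print(views_to_render(views, [-12.3, -6.1, 8.0, 14.4]))
```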
The eye tracking part was pretty radical. I mean, it's starting to get into VR headsets, but just for focusing the processing power on an area, not for specific UI interactions like the ones this person proposed. That seemed pretty novel to me.