Being able to cram this many pixels into such a small space also helps improve 'normal'-density displays: as processes improve, fewer dead pixels allow for larger panels. The primary issue with large displays is the failure rate during manufacturing.
OLED can already do 1000+ PPI, which is good enough for most VR. The Vive Pro is 615 PPI, and that is pretty damn good; even if you do 4K on that panel, it is still only ~825 PPI. (We will likely run into hardware rendering problems with high frame rates and high resolutions before we hit OLED's limits.)
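For reference, a quick back-of-the-envelope check of those figures. The per-eye resolutions are the published numbers; the ~3.5" per-eye panel diagonal is my assumption:

    import math

    def ppi(width_px, height_px, diagonal_in):
        """Pixels per inch from panel resolution and diagonal size."""
        return math.hypot(width_px, height_px) / diagonal_in

    # Vive Pro: 1440x1600 per eye; ~3.5" panel diagonal assumed here
    print(round(ppi(1440, 1600, 3.5)))  # -> 615
    # A hypothetical "4K per eye" (1920x2160) on the same size panel
    print(round(ppi(1920, 2160, 3.5)))  # -> ~826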
>What improvements are needed to increase OLED sales?
Too expensive, along with colour accuracy over time, burn-in, etc. None of these are really solved (yet).
Articles I just looked at say ~400 PPI is "retina" density at 12" away. If these are for VR or viewfinders 3" from your eye, what PPI is needed? Still looking...
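A naive sketch of the scaling, ignoring the headset optics (which magnify the panel, so real requirements differ): "retina" is roughly a constant angular density, so the required PPI scales inversely with viewing distance.

    import math

    def pixels_per_degree(ppi, distance_in):
        """Pixels spanning one degree of view at a given distance (small-angle approx)."""
        return ppi * distance_in * math.tan(math.radians(1))

    print(round(pixels_per_degree(400, 12)))  # ~84 ppd at 12"
    # Holding angular density constant, required PPI scales with 1/distance:
    print(round(400 * 12 / 3))  # -> 1600 PPI at 3" (ignoring lenses)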
Actually, that's a pretty interesting article, posting a bit more:
>But in one respect VR still lags far behind our biological limits: we are nowhere near a pixel density that mimics real life.

>In theory, in our fovea, we need about 120 pixels per degree of view to match reality (although Meese [Tim Meese, professor of vision science at Aston University] says in practice, people generally can't see in finer detail than around 80 pixels per degree). Currently, the best headsets manage about 10 pixels per degree horizontally. Given the need to scale up by a factor of about 10 – on both axes – the increase in resolution required is enormous. "I don't think the technology is there yet for those display pixel densities," says Bryan William Jones, a retinal neuroscientist at the University of Utah.

>And for VR obsessives who will settle for nothing less than a perfect replica of reality, even 120 pixels per degree might not be enough. Put two lines above each other and move one slightly to the left or right, and it turns out we are "extraordinarily sensitive" to even the tiniest differences between them, says Meese, even to movements smaller than the width of a cone in the eye.

>To match this sensitivity on a computer monitor would require a pixel density that "beggars belief", he says, and well beyond 120 pixels per degree. The US Air Force has estimated that a computer screen would require 10,300 pixels per inch to simulate these so called "hyper acuities". This is more than 30 times that of an iPhone 7 (and 12 times the density boasted by a new Samsung VR display, revealed in June, which has a display of around 850 pixels per inch).

>Such fine-grain vision is rarely used in the real world, says Meese. "The closest example you can think of is threading the eye of a needle, or something like that," he thinks. But it serves as a reminder that imitating the look of real life in VR remains a technological pipe dream – don't expect to do any VR sewing in the next decade or two.
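To put those pixels-per-degree figures in panel terms, here's a hedged sketch: the 100-degree horizontal FOV is an assumed round number (not from the article), and uniform angular density is assumed, which real lenses don't give you.

    # Horizontal pixels per eye implied by pixels-per-degree targets,
    # assuming a hypothetical 100-degree horizontal FOV and uniform density.
    fov_deg = 100
    for ppd in (10, 80, 120):
        print(f"{ppd:>3} ppd -> {fov_deg * ppd:>6,} px wide per eye")
    #  10 ppd ->  1,000 px (roughly today's headsets, per the article)
    #  80 ppd ->  8,000 px
    # 120 ppd -> 12,000 px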
This is talking about right at the fovea, though. That's a pretty small space.
I'm thinking we will see VR sewing in the next decade, because we'll find shortcuts to make it work. Sure, hyper-acuity is hard, but it's virtual reality: as long as you can make it look real, it may as well be.
What I mean by that is: perhaps we could use tricks like stacking high-density screens that each carry different portions of the pixels. Say you've got three high-res screens somewhere in the headset, and somehow they are all routed to the eye. With optics, a high-resolution screen can be made even higher resolution but smaller. It's still a very difficult problem to solve, for sure, but I think if there are multiple difficult problems to choose from, chances are we'll find ways to partially solve some, combine those, and have something like a solution as a result.
With 3 screens you could have the full scene on one, a focused region on another, and the fovea on a third, and then optically stack that...somehow.
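A rough sketch of why that layering could pay off: the fovea only covers a few degrees, so serving full acuity only there slashes the pixel budget. Every number below is an illustrative assumption, not a real spec:

    # Illustrative pixel budgets for a 100x100-degree view; all numbers
    # are assumptions chosen for the sake of the comparison.
    fov = 100  # degrees on both axes

    uniform = (fov * 120) ** 2  # 120 ppd everywhere

    layers = [
        (fov, 10),   # layer 1: whole field at 10 ppd
        (30,  40),   # layer 2: 30-degree focus region at 40 ppd
        (5,  120),   # layer 3: ~5-degree foveal patch at full 120 ppd
    ]
    foveated = sum((deg * ppd) ** 2 for deg, ppd in layers)

    print(f"uniform:  {uniform:,} px")                    # 144,000,000
    print(f"foveated: {foveated:,} px")                   # 2,800,000
    print(f"savings:  ~{uniform / foveated:.0f}x fewer")  # ~51x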