An investigation into a rendering technique that keeps virtual reality both visually sharp and responsive has yielded promising findings, bringing the holy grail of non-queasy VR a step closer.
Thorsten Roth and Dr Yongmin Li of Brunel University London's Department of Computing, together with Martin Weier and collaborators in Germany (at Bonn-Rhein-Sieg University of Applied Sciences and Saarland University), carried out a user study of their novel image rendering technique. They found a sweet spot of image quality beyond which participants did not perceive additional detail as an improvement; in some cases, extra detail even seemed to make things worse.
Virtual reality can make us feel sick because of a lag, termed latency, between our eye movements and the corresponding update of the visual display. For computers and consoles, rendering very high-resolution graphics in an attempt to mimic reality drains resources and makes that lag even worse.
Making this latency smaller reduces nausea and allows video games and other experiences to feel more real.
Blurring over the details
Mr Roth and colleagues’ technique builds on foveated rendering, which exploits one of the main limitations of the human eye. In a healthy visual system, sharpness is highest at the centre of the field of vision, the region imaged by the fovea, and visual acuity falls away towards the periphery. This is why we turn our heads to follow a point of interest rather than relying on the edges of our vision.
“We use a method where, in the VR image, detail reduces from the user’s point of regard to the visual periphery,” explained Mr Roth, “and our algorithm, whose main contributor is Mr Weier, then incorporates a process called reprojection.
“This keeps a small proportion of the original pixels in the less detailed areas and uses a low-resolution version of the original image to ‘fill in’ the remaining areas.”
Foveated rendering as analysed by Mr Roth and colleagues: the centre of the image is sharp, and detail reduces between the inner radius and the outer radius.
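As a rough sketch of that sampling-and-fill idea, the snippet below mimics the compositing step: a pixel is kept at full detail with a probability that ramps down with angular distance from the gaze point, and the remaining pixels are filled from an upscaled low-resolution image. The function names, the linear ramp and the 5% peripheral floor are illustrative assumptions, not the authors' published implementation.

```python
import numpy as np

def sampling_probability(ecc_deg, inner_deg=10.0, outer_deg=20.0, floor=0.05):
    """Probability of keeping a pixel at full detail, given its angular
    distance (eccentricity) from the tracked gaze point: 1.0 inside the
    inner radius, ramping down linearly (an assumed falloff shape) to a
    small floor at and beyond the outer radius."""
    t = np.clip((ecc_deg - inner_deg) / (outer_deg - inner_deg), 0.0, 1.0)
    return 1.0 - t * (1.0 - floor)

def composite_frame(ecc_map, full_detail, low_res_upscaled, rng):
    """Keep a sparse subset of full-detail pixels in the periphery and
    fill the rest from an upscaled low-resolution image, echoing the
    'fill in' step described above (sketch only, grayscale pixels)."""
    keep = rng.random(ecc_map.shape) < sampling_probability(ecc_map)
    return np.where(keep, full_detail, low_res_upscaled)

# Toy 2x2 frame: per-pixel eccentricity in degrees, grayscale values.
rng = np.random.default_rng(42)
ecc = np.array([[0.0, 12.0], [18.0, 30.0]])
full = np.full((2, 2), 1.0)    # stand-in for fully ray-traced pixels
low = np.full((2, 2), 0.25)    # stand-in for the low-res fill
print(composite_frame(ecc, full, low, rng))
```

In a real renderer, the full-detail values would come from ray tracing only the sampled pixels, which is where the performance saving arises.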
Each study participant wore an Oculus Rift DK2 VR headset, adapted to include an eye tracker that closely followed the movement of each eye. Participants viewed 96 VR videos, each eight seconds long, combining different subject matter, eye-movement conditions (fixed, steadily moving or free movement) and degrees of foveated rendering: a small, medium or large area of sharp detail at the centre of the field of vision, or the whole field in sharp detail.
Perception misconception
After viewing each video, users were asked whether what they saw was free of visual artefacts: blurriness and flickering edges that are tell-tale signs of low-quality moving images.
Interestingly, the sweet spot for the foveated rendering was the medium-sized area: an inner radius of 10° and an outer radius of 20° around the centre of vision. Adding more detail in the periphery brought no noticeable improvement for participants, and some felt it actually produced a lower-quality image.
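Plugging those radii into a simple linear ramp (again an assumption for illustration; the published method defines its own falloff) shows how detail tapers across the field of view:

```python
# Detail weight under the study's medium setting: full detail out to the
# 10-degree inner radius, tapering (linearly, by assumption) to zero
# added detail at the 20-degree outer radius.
inner, outer = 10.0, 20.0
for ecc in [0.0, 10.0, 15.0, 20.0, 40.0]:
    t = min(max((ecc - inner) / (outer - inner), 0.0), 1.0)
    print(f"{ecc:5.1f} deg from gaze -> detail weight {1.0 - t:.2f}")
```

A pixel 15° from the point of regard, for example, sits exactly halfway down the ramp.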
Mr Roth commented: “We showed that it’s not possible for users to make a reliable differentiation between our optimised rendering approach and full ray tracing, as long as the foveal region is at least medium-sized.”
The study also unearthed a visual tunnelling effect when users followed a moving target: the mental load of the tracking task means that visual artefacts are effectively filtered out by human perception, leaving them largely unnoticed.
Summing up, Mr Roth said: “Our method can be used to generate visually pleasant VR results at high update rates. This paves the way to delivering a real-seeming VR experience while reducing the likelihood you’ll feel queasy.”
The paper, by Thorsten Roth, Martin Weier, André Hinkenjann, Yongmin Li and Philipp Slusallek, is published in the Journal of Eye Movement Research.
The foveated rendering technique analysed in this paper was originally described in a previous paper by the authors: Weier et al. (2016), Computer Graphics Forum, 35(7).
Reported by:
Joe Buchanunn,
Media Relations
joe.buchanunn@brunel.ac.uk