Oculus Rift Developer Kit 2 and Latency Mitigation Techniques


This was a presentation I gave at HPG 2014 in Lyon, France.


DK2 and Latency Mitigation

Cass Everitt
Oculus VR

Being There

• Conventional 3D graphics is cinematic
  – Shows you something
    • On a display, in your environment

• VR graphics is immersive
  – Takes you somewhere
    • Controls everything you see, defines your environment

• Very different constraints and challenges

Realism and Presence

• Being there is largely about sensor fusion
  – Your brain’s sensor fusion
  – Trained by reality
  – Can’t violate too many hard-wired expectations

• Realism may be a non-goal
  – Not required for presence
  – Expensive
  – Uncanny valley

Oculus Rift DK2

• 90°-110° FOV
• 1080p OLED screen
  – 960x1080 per eye
• 75 Hz refresh
• Low persistence
• 1 kHz IMU
• Positional tracking

Low Persistence

• Stable image as you turn - no motion blur
• Rolling shutter
  – Right-to-left
  – 3ms band of light
  – Eyes offset temporally

Positional Tracking

• External camera, pointed at user

• 80° x 64° FOV

• ~2.5m range
• ~0.05mm @ 1.5m

• ~19ms latency
  – Only 2ms of that is vision processing

Position Tracking

[Slide images: technology + magic]

The good news: You don’t need to know.

Image Synthesis

• Conventional planar projection
  – GPUs like this because
    • Straight edges remain straight
    • Planes remain planar after projection

• Synthesis takes “a while”
  – So we predict the position / orientation
  – A long range prediction: ~10-30ms out (see the prediction sketch below)
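
A minimal sketch of what such a long-range prediction can look like, assuming a constant-angular-velocity model driven by the latest IMU gyro sample; the Quat type and function names here are illustrative, not the Oculus SDK API.

    #include <cmath>

    // Illustrative quaternion type and product (Hamilton convention), not a real API.
    struct Quat { float w, x, y, z; };

    Quat mul(const Quat& a, const Quat& b) {
        return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
                 a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
                 a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
                 a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
    }

    // Predict the orientation dt seconds ahead (e.g. 0.010-0.030 s for the long-range
    // prediction) by integrating the latest gyro angular velocity (rad/s, assumed here
    // to be expressed in world space) at a constant rate.
    Quat PredictOrientation(const Quat& current, float wx, float wy, float wz, float dt)
    {
        float rate  = std::sqrt(wx*wx + wy*wy + wz*wz);   // angular speed, rad/s
        float angle = rate * dt;                          // total rotation over dt
        if (angle < 1e-6f) return current;                // effectively stationary
        float s = std::sin(0.5f * angle) / rate;          // scales axis to sin(angle/2)
        Quat delta = { std::cos(0.5f * angle), wx * s, wy * s, wz * s };
        return mul(delta, current);                       // apply the predicted extra rotation
    }

A real tracker fuses camera and accelerometer data as well, but the structure is the same: estimate the pose for the time the frame will actually be displayed, not the time it is rendered.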

Note on Sample Distribution

• Conventional planar projection, not great for very wide FOV
  – Big angle between samples at center of view (worked out in the snippet below)
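
To make that concrete, here is a small standalone calculation (illustrative numbers, not from the talk) of the angle one pixel subtends at the center versus the edge of a planar projection:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double kPi    = 3.14159265358979323846;
        const double fovDeg = 100.0;   // horizontal FOV of the planar projection
        const double pixels = 1000.0;  // horizontal resolution, for round numbers

        // Width of the image plane (at unit distance) and of a single pixel on it.
        const double halfW = std::tan(0.5 * fovDeg * kPi / 180.0);
        const double dx    = 2.0 * halfW / pixels;

        // Angle subtended by the centermost pixel vs. the outermost pixel.
        double centerDeg = 2.0 * std::atan(0.5 * dx) * 180.0 / kPi;
        double edgeDeg   = (std::atan(halfW) - std::atan(halfW - dx)) * 180.0 / kPi;

        // Prints roughly 0.137 deg at the center vs. 0.056 deg at the edge:
        // the planar projection spends its samples most sparsely where you look.
        std::printf("center: %.3f deg/pixel, edge: %.3f deg/pixel\n", centerDeg, edgeDeg);
        return 0;
    }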

Alternative Sample Distributions

• Direct render to cube map may be appealing
• Tiled renderers could do piecewise linear
  – Brute force will do in the interim
  – But not much FOV room left at 100°

Optical Distortion

Distortion Correction

Optical Distortion

• HMD optics cause different sample distribution
  – and chromatic aberration

• Requires a resampling pass (sketched after this list)
  – Synthesis distribution -> delivery distribution
  – Barrel distortion to counteract the lens’s distortion

• Could be built into a “smarter” display engine
  – Handled in software today
    • Requires either CPU, separate GPU, or shared GPU
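
As a sketch of what the software resampling pass amounts to (shown as plain C++ for readability; in practice it is a fragment shader, and the polynomial coefficients below are placeholders rather than DK2's calibrated values):

    struct Vec2 { float x, y; };

    // Placeholder radial distortion polynomial; real coefficients come from lens calibration.
    static const float k0 = 1.0f, k1 = 0.22f, k2 = 0.24f;

    // For a display-space position (normalized, centered on the lens axis), return the
    // texture coordinate to fetch from the synthesized eye buffer. Scaling samples
    // outward with radius produces the barrel-distorted output image, which the lens's
    // pincushion distortion then straightens back out for the eye.
    Vec2 DistortSample(Vec2 ndc)
    {
        float r2    = ndc.x * ndc.x + ndc.y * ndc.y;     // squared radius from lens center
        float scale = k0 + k1 * r2 + k2 * r2 * r2;       // grows with radius
        return { ndc.x * scale, ndc.y * scale };
    }

    // Chromatic aberration correction is the same idea with a slightly different radial
    // scale per color channel, so red, green and blue fetch from slightly different spots.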

Display Engine (detour)

• In modern GPUs, the 3D synthesis engine builds buffers to be displayed

• A separate engine drives the HDMI / DP / DVI output signal using that buffer

• This engine just reads rows of the image
• More on this later…

Time Warp

• Optical resampling provides an opportunity
  – Synthesized samples have known location
    • Global shutter, so constant time
  – Actual eye orientation will differ
    • Long range prediction had error
    • Better prediction just before resampling
    • Both predictions are for the same target time

• So resample for optics and prediction error simultaneously! (see the sketch below)

• Note: This just corrects the view of an “old” snapshot of the world
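
A rotation-only sketch of the combined resample (illustrative types and conventions, not the shipping shader): the only change to the distortion pass is an extra rotation, taking each display sample's view direction from the fresher predicted orientation back into the orientation the frame was rendered with, before projecting onto the old image plane.

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };
    struct Mat3 { float m[3][3]; };   // rotation matrix, row-major

    Vec3 mulMat(const Mat3& R, const Vec3& v) {
        return { R.m[0][0]*v.x + R.m[0][1]*v.y + R.m[0][2]*v.z,
                 R.m[1][0]*v.x + R.m[1][1]*v.y + R.m[1][2]*v.z,
                 R.m[2][0]*v.x + R.m[2][1]*v.y + R.m[2][2]*v.z };
    }

    // tanCoord: the lens-corrected sample position for this display pixel, in the
    // tan-angle space of the *latest* predicted orientation (i.e. the output of the
    // distortion lookup). latestToRender: rotation from the latest predicted eye
    // orientation to the orientation used when the frame was synthesized; both
    // predictions target the same display time.
    // Returns the planar-projection coordinate at which to sample the old frame.
    Vec2 TimeWarpSample(Vec2 tanCoord, const Mat3& latestToRender)
    {
        Vec3 dir = { tanCoord.x, tanCoord.y, -1.0f };  // view direction (camera looks down -z)
        Vec3 old = mulMat(latestToRender, dir);        // same direction in the render-time frame
        return { -old.x / old.z, -old.y / old.z };     // re-project onto the old image plane
    }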

Time Warp + Rolling Shutter

• Rolling shutter adds time variability
  – But we know time derivative of orientation

• Can correct for that as well (see the per-column sketch below)
  – Tends to compress sampling when turning right
  – And stretch out sampling when turning left
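
A deliberately simplified illustration (yaw-only, invented parameter names): since each display column is lit at a known offset into the scan and the angular velocity is known, the warp rotation can be nudged per column.

    // Extra yaw (radians) to fold into the time-warp rotation for one display column.
    // Column 0 is the column scanned out first (the right edge, since the panel scans
    // right-to-left); columnCount - 1 is the last one lit.
    float ColumnYawCorrection(int column, int columnCount,
                              float scanTimeSec,        // time to sweep the whole panel
                              float yawRateRadPerSec)   // current head yaw rate from the IMU
    {
        // How long after scan-out began this column is actually displayed.
        float t = scanTimeSec * static_cast<float>(column) / static_cast<float>(columnCount - 1);
        // The head has rotated a little further by then; turning one way compresses the
        // effective sampling across the scan, turning the other way stretches it.
        return yawRateRadPerSec * t;
    }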

Asynchronous Time Warp

• So far, we have been talking about 1 synthesized image per eye per display period
  – @75 Hz, that’s 150 Hz for image synthesis
  – Many apps cannot achieve these rates
    • Especially with wide-FOV rendering

• Display needs to be asynchronous to synthesis (sketched below)
  – Just like in conventional pipeline
  – Needs to be isochronous - racing the beam
  – Direct hardware support for this would be straightforward
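
A structural sketch of that split, with synthesis free-running and the warp locked to refresh; the types and helpers (PredictPose, RenderScene, WarpToDisplay, WaitForNextVsync, NextDisplayTime) are assumed placeholders rather than a real API, and a production version needs more careful buffering than shown here.

    #include <atomic>

    struct Pose  { /* orientation (+ position) */ };
    struct Frame { Pose renderPose; /* plus the synthesized eye buffers */ };

    // Placeholders for pieces this sketch does not define.
    Pose   PredictPose(double targetDisplayTime);
    void   RenderScene(Frame* f);
    void   WarpToDisplay(const Frame* f, const Pose& freshPose);
    void   WaitForNextVsync();
    double NextDisplayTime();

    Frame frames[2];                       // double buffer; real code wants triple buffering
    std::atomic<int>  readable{-1};        // index of the newest completed frame, -1 = none yet
    std::atomic<bool> running{true};

    void SynthesisLoop()                   // runs at whatever rate the app can manage
    {
        int write = 0;
        while (running.load()) {
            Frame& f = frames[write];
            f.renderPose = PredictPose(NextDisplayTime());  // long-range prediction, ~10-30 ms out
            RenderScene(&f);                                // may take longer than one refresh
            readable.store(write);                          // publish the finished frame
            write = 1 - write;
        }
    }

    void WarpLoop()                        // locked to the 75 Hz display: racing the beam
    {
        while (running.load()) {
            WaitForNextVsync();
            int r = readable.load();
            if (r < 0) continue;                            // nothing synthesized yet
            Pose fresh = PredictPose(NextDisplayTime());    // short-range prediction, just before scan-out
            WarpToDisplay(&frames[r], fresh);               // resample the old frame with the new pose
        }
    }

The point is only the control flow: the warp loop never waits on synthesis, so the display always scans out the most recent completed frame corrected with the freshest pose prediction.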

Asynchronous Time Warp

• Slower synthesis requires wider FOV
  – Will resample the same image multiple times

• Stuttering can be a concern
  – When display and synthesis frequencies “beat” (see the example after this list)
  – Ultra-high display frequency may help this
  – Tolerable synthesis rate still TBD

• End effect: your eyes see the best information we have
  – Regardless of synthesis rate
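
A small illustration of the beating concern (invented rates, not measurements): when the synthesis rate does not divide the 75 Hz refresh evenly, the number of times each synthesized frame is re-warped alternates, and that alternation repeats at the beat frequency, which is where judder can become visible.

    #include <cstdio>
    #include <cmath>

    int main() {
        const double display = 75.0;                        // DK2 refresh rate, Hz
        const double synthesis[] = { 75.0, 60.0, 50.0, 37.5 };

        for (double s : synthesis) {
            double warps = display / s;                     // average re-warps per synthesized frame
            double beat  = std::fabs(display - std::round(display / s) * s);  // pattern repeat rate
            std::printf("synthesis %5.1f Hz: %.2f warps/frame, pattern repeats at %4.1f Hz\n",
                        s, warps, beat);
        }
        return 0;
    }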

Questions?

• cass.everitt@oculusvr.com

• For vision questions:
  – dov.katz@oculusvr.com