Virtual Reality for Dummies


Out of curiosity (for now), what does it take to get your 3D game into VR? I'm not planning to actually do it anytime soon, but I'd just like to know briefly:

* What kind of 3D (buffer) data does a VR kit need (minimally) to start rolling?

I would think it needs a depth buffer to begin with. Is that just your ordinary depth buffer most apps have for various techniques, or does it need to be spiced up with additional data? Or maybe something entirely different?

* Can you turn any 3D program into VR? Or does it really require specific techniques from the ground up?

* So I have a 3D (first-person) game made in OpenGL (4.5), can that eventually be transferred to VR?

* There are a couple of kits out there now (Oculus, PS4 VR, ...). I guess they come with their own SDKs. Do they roughly work the same (can I swap easily between them), or are these SDKs big and hard to master?

* Controls (with your head) - I guess that is really bound to whatever SDK the VR kit brings with it, right? Or is this also standardized, straightforward stuff?

* For artists, is there anything that needs to be changed or tweaked in their workflow (creating 3D props, textures, normalMaps, ...)?

* For design / game-rules, would you need to alter things like motion-speed, the size of your rooms? Or can it be mapped pretty much one-on-one from a non-VR setting?

* Audio - anything that needs adjustments here, aside from being 3D/stereo as good as possible?

* Performance. Being close to your eyes, would you need larger resolutions? Anything else that slows down the performance?

* Your personal experience - easy, hard? Maybe easy to set up, but hard to make it work *well*?

And excuse me if some of these are stupid questions. The only VR experience I have was watching Jurassic Park pictures with those green/red glasses back in the nineties!


I can only think of two special features:
* Fisheye lens: higher resolution / more detail / better AA at the center. A mip-mapped framebuffer, where the high-res buffer only covers the central region.

* Motion sickness: read out the head orientation, do a last render pass to rotate the view, and FreeSync it out. It would even be desirable to incorporate the rotation as a scanline rendering algorithm in the buffer read-out circuit. You know, orientation data goes from the VR set into the framebuffer address generator, the right pixel is fetched and the values are pushed to the LCD. Did I write LCD? LCD is slow. OLED? 120 FPS!

So basically the graphics card has to support VR. Nothing a game programmer can do or not do.

>> Nothing a game programmer can do or not do

That almost sounds too good to be true :D

I can understand the fisheye. But why lower the quality at the outer regions? Is that to emulate "out of focus" - but wouldn't your eyes already be doing that - I mean your own real eyes? Or is it more just to avoid overwhelming your head with too many visuals all around?

I have no VR experience beyond reading some things.

"Special" techniques I've heard of are foveated rendering, and asynchronous time warp. (temporal reprojection)

Also I remember when Nvidia's Pascal was announced they had a feature called "Simultaneous Multi-Projection", which reuses geometry between the projections for the left and right eye, and more... but I can't describe the rest beyond saying it uses multiple projections per eye... google it.

edit - here's the SMP article: http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/11

* Performance. Being close to your eyes, would you need larger resolutions? Anything else that slows down the performance?

I've read that with the current resolution of VR (slightly greater than 1080p) you can make out individual pixels. But you need a minimum of 90fps with no dips from what I've heard. Otherwise when you move your head fast there is a severe lag that IIRC (again from what I read) causes nausea/dizziness.

-potential energy is easily made kinetic-

From 3D graphics and shaders to anti-vomit code. I can imagine poor performance, crazy lighting effects, or certain environment settings are indeed sickening.

But seriously, from what I read the biggest challenge then is to keep up speed, which is quite hard when looking at my own program that barely reaches 60 FPS at a 1600 x 900 resolution. Certainly with big particles flying around, things can get smudgy. And I guess even some AAA engines/titles suffer the same.

Glancing through the SMP article you posted, it seems you have to render everything twice (with a small offset, like your own eyes have). But techniques like "Simultaneous Multi-Projection" save you from having to push geometry twice, and the fisheye approach avoids having to shade every pixel at full resolution. But are those steps automagically done for you by the video card, or do you still have to teach your GPU a lesson?

I've recently gone through the experience of porting a console/PC engine to VR, so I think I can answer most of your questions. However I only have real hands-on experience with the Rift, so I'm not an expert on the Vive or PSVR.

* What kind of 3D (buffer) data does a VR kit need (minimally) to start rolling?

The current headsets only need stereo LDR color buffers. There's been some discussion of using depth and/or velocity to assist with reprojection, but currently none of them use that. So basically you'll give the SDK two separate textures (one for the left eye and one for the right eye), or a combined 2x-wide texture where the left half is the left eye and the right half is the right eye. We give them the latter. Really you can think of it as another swap chain, except this swap chain goes to the headset and not to a window. The number of pixels you end up giving them is about twice the number in a 1920x1080 buffer, which is higher than the display resolution due to the fisheye warping.
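To make the "another swap chain" analogy concrete, here's a minimal sketch of handing a combined side-by-side eye texture to the compositor, written against OpenVR (the Oculus SDK's swap-chain/submit flow is analogous). It assumes an OpenGL texture handle and that vr::VR_Init has already succeeded; swap-chain creation and error handling are omitted.

```cpp
// Minimal sketch: submit one 2x-wide OpenGL texture (left half = left eye,
// right half = right eye) to the OpenVR compositor.
#include <openvr.h>
#include <cstdint>

void SubmitSideBySideEyes(unsigned int glTextureId)
{
    vr::Texture_t tex = { reinterpret_cast<void*>(static_cast<std::uintptr_t>(glTextureId)),
                          vr::TextureType_OpenGL, vr::ColorSpace_Gamma };

    // Texture bounds (uMin, vMin, uMax, vMax) pick which half each eye reads.
    vr::VRTextureBounds_t leftHalf  = { 0.0f, 0.0f, 0.5f, 1.0f };
    vr::VRTextureBounds_t rightHalf = { 0.5f, 0.0f, 1.0f, 1.0f };

    // The compositor applies lens distortion and time warp for you.
    vr::VRCompositor()->Submit(vr::Eye_Left,  &tex, &leftHalf);
    vr::VRCompositor()->Submit(vr::Eye_Right, &tex, &rightHalf);
}
```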

* Can you turn any 3D program into VR? Or does it really require specific techniques from the ground up?

From a purely engine/graphics point of view, I would say the answer is "mostly yes". There's a few things that don't work as well in VR (normal mapping is a bit more obvious, as are any tricks that don't give the eyes proper parallax), but for the most part you'll be fine with standard 3D graphics techniques. The biggest considerations are performance, and gameplay/locomotion. Maintaining a consistent 90Hz is not easy, especially on PC. For gameplay you really want to think about how you can make a compelling experience in VR that plays to its strengths, and avoids motion sickness. I'll tell you right now that if you port a FPS to VR that uses analog sticks for movement and rotation, you're going to have some very nauseous players. There are some players that can handle more extreme situations without discomfort, but I personally am of the belief that as VR developers we have a responsibility to make comfort a top priority. It's already a small, niche market, and we're never going to expand past that if we make the average user want to throw up when they play our games.

* So I have a 3D (first-person) game made in OpenGL (4.5), can that eventually be transferred to VR?

Oculus supports OpenGL, DX11 and DX12. OpenVR (Vive) supports DX9, DX11, DX12, and OpenGL.

* There are a couple of kits out there now (Oculus, PS4 VR, ...). I guess they come with their own SDKs. Do they roughly work the same (can I swap easily between them), or are these SDKs big and hard to master?

So Oculus has their own SDK, which is pretty straightforward and easy to use. It actually doesn't have a very big surface area for the core functionality: just some functions for making a swap chain, presenting it, and functions for accessing the current pose and acceleration data from the headset and touch controllers. There's also a separate platform SDK for integrating into their store and multiplayer systems, which is similar to Steamworks. Valve has OpenVR, which has a few more abstractions compared to the Oculus SDK, but it actually lets you target both the Rift and the Vive. PSVR is a bit different due to being on a console, but that's under NDA so I don't want to get into specifics here.

* Controls (with your head) - I guess that is really bound to whatever SDK the VR kit brings with it, right? Or is this also standardized, straightforward stuff?

This is pretty easy to work with. Both the Oculus SDK and OpenVR will give you a "pose" that represents the current position and orientation of the headset, which is calculated using a combination of sensors in the headset and external tracking cameras. One thing you have to watch out for is keeping your coordinate spaces straight, since you will have to transform from "real world" coordinate space (usually relative to the user's initial pose when they started the game) to your game's world coordinate space.

The other wrinkle has to do with the way the headsets compensate for latency. Usually the basic flow of a VR app will go query pose -> render the world using a camera locked to the headset pose -> present the final rendered images to the compositor -> image shows up on the headset screen. The problem is that there can be tens of milliseconds between the first and last step, which can cause a noticeable "laggy" feeling for the user. To compensate for this, all of the current VR headsets have their compositor estimate the pose at the time of display (using the current angular velocity and acceleration), and apply a warping function to your image that essentially rotates the pixels so that they appear to have less latency. What this means for you as a developer is that you want to minimize the time between pose query and presenting, since that means the compositor won't have to warp your image as much. So ideally you want to grab the pose right before you issue your rendering commands.
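A rough per-frame skeleton of that flow, again sketched with OpenVR (the Oculus SDK has equivalent calls such as ovr_GetTrackingState and ovr_SubmitFrame); the tracking-space-to-world transform mentioned in the comments is your own code, not part of the SDK:

```cpp
// Block on the compositor, fetch the predicted pose as late as possible,
// render both eyes, then submit. The later the pose query, the less the
// compositor has to warp the image.
#include <openvr.h>

void RenderVRFrame()
{
    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];

    // Blocks until the compositor wants the next frame, then returns poses
    // predicted for the moment this frame will actually hit the display.
    vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount, nullptr, 0);

    const vr::HmdMatrix34_t& hmdPose =
        poses[vr::k_unTrackedDeviceIndex_Hmd].mDeviceToAbsoluteTracking;
    (void)hmdPose;

    // 1) Combine hmdPose with your own tracking-space-to-world transform.
    // 2) Render the left and right eye views with that camera.
    // 3) Submit to the compositor (see the earlier snippet) as soon as possible.
}
```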

* For artists, is there anything that needs to be changed or tweaked in their workflow (creating 3D props, textures, normalMaps, ...)?

I already mentioned normal maps, which may not be as effective at fooling users in VR. In general you also want to try to avoid noisy textures with lots of high-frequency detail. Seeing lots of small details move across the screen tends to make users uncomfortable, particularly if it's in the periphery. So you want to stick to flatter, less-noisy textures if you can. As for props, if you can get some props with physics on them that the user can manipulate, that's always fun. We call them "toys" in our game, and put as many in the world as we can.

The other big concern is UI. Standard screen-space 2D UI doesn't work very well for VR, and you can't just map it to the headset's screen. It doesn't feel nice, and you only have a very small area of the screen where text is readable. So at the very least you need to put the UI on a plane that the user can look at by turning their head, but ideally you want to find a better way to integrate into your game world. We ended up writing new UI tech from scratch for our game that would let us efficiently populate the world with UI.

* For design / game-rules, would you need to alter things like motion-speed, the size of your rooms? Or can it be mapped pretty much one-on-one from a non-VR setting?

Possibly, depending on your game. You definitely want to try to avoid any situations where the user travels very quickly through smaller areas, since that can give them motion sickness. The fast-moving pixels in the periphery can also cause discomfort. Otherwise, as long as your levels work for normal human scale they should be fine.

* Audio - anything that needs adjustments here, aside from being 3D/stereo as good as possible?

Positional 3D audio is nice if you can do it. Oculus has plug-ins for the big engines and for popular audio middleware (such as Wwise) that will do it for you.

* Performance. Being close to your eyes, would you need larger resolutions? Anything else that slows down the performance?

The SDKs will let you query for the ideal resolution. This resolution is larger than the actual display resolution, since it's picked such that there's close to 1:1 pixel density in the center of the image after applying the fisheye warping. As I said earlier, the Rift will request a resolution that's roughly twice the size of 1920x1080, so you're dealing with a lot of pixels in not a lot of frame time.
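For illustration, the query itself is tiny; this sketch uses OpenVR, and the Oculus SDK exposes the same idea through ovr_GetFovTextureSize:

```cpp
// Ask the runtime what per-eye render target size it wants; this is already
// the oversized resolution that accounts for the lens distortion.
#include <openvr.h>
#include <cstdint>

void QueryEyeBufferSize()
{
    uint32_t eyeWidth = 0, eyeHeight = 0;
    vr::VRSystem()->GetRecommendedRenderTargetSize(&eyeWidth, &eyeHeight);

    // A combined side-by-side target would be (eyeWidth * 2) x eyeHeight,
    // which on current headsets works out noticeably larger than 1920x1080.
}
```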

One of the popular techniques for reducing pixel load is what Nvidia refers to as multi-res shading. This technique lets you use higher resolution towards the center of the screen and lower resolution towards the edges, which better matches the pixel distribution after fisheye distortion. Nvidia advocates doing it using their proprietary hardware extensions, but you can also do it in a way that works on any GPU: see slide 21 of this presentation, and slide 29 of this presentation.
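As a back-of-the-envelope illustration of why that's worth the trouble, here's a quick pixel-count estimate for an assumed split where the centre stays at full resolution and the periphery is shaded at a lower scale; the fractions below are made-up example numbers, not the values from those slides:

```cpp
// Estimate how much pixel work multi-res shading saves for one example split.
#include <cstdio>

int main()
{
    const float centreFrac     = 0.5f;  // centre region spans 50% of each axis
    const float peripheryScale = 0.6f;  // shade the periphery at 60% res per axis

    float centreArea     = centreFrac * centreFrac;        // 25% of the pixels
    float peripheryArea  = 1.0f - centreArea;              // 75% of the pixels
    float shadedFraction = centreArea
                         + peripheryArea * peripheryScale * peripheryScale;

    std::printf("Pixels shaded vs. full-res everywhere: %.0f%%\n",
                shadedFraction * 100.0f);   // ~52% with these example numbers
    return 0;
}
```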

Another performance concern comes from having to draw everything twice for stereo rendering. Usually the simplest way to get VR working is to wrap your render function in a for loop that iterates once per eye, but this is very inefficient. At the very least you need to pull out things that aren't view-dependent, like rendering shadow maps. Ideally you want to set things up so that you don't have the loop at all, since making two passes through your render loop means doubling all of the sync points on the GPU. It's faster to do "draw meshes for both eyes to render target A -> sync -> draw meshes for both eyes to render target B" than it is to do "draw meshes for the left eye to render target A -> sync -> draw meshes for the left eye to render target B -> sync -> draw meshes for the right eye to render target A -> sync -> draw meshes for the right eye to render target B". You can also potentially cut your draw calls in half by doing both eyes simultaneously. We use instancing and clip planes to do this, but you can also use Nvidia's viewport multicast extension to do it more efficiently.
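Here's a minimal sketch of the instancing approach on the GL side: double the instance count and let the vertex shader route each copy to its eye. The shader's job is only described in the comments, and none of the names below come from a specific SDK:

```cpp
// Single-pass stereo via instancing (sketch). Assumes a vertex shader that:
//   - uses (gl_InstanceID & 1) to pick the left or right view-projection matrix,
//   - offsets clip-space X so each copy lands in its own half of the target,
//   - writes gl_ClipDistance[0] so geometry is clipped at the centre seam.
#include <glad/glad.h>   // or whichever GL loader the engine already uses

void DrawMeshStereo(GLuint vao, GLsizei indexCount)
{
    glEnable(GL_CLIP_DISTANCE0);   // enable the per-eye clip plane
    glBindVertexArray(vao);

    // Two instances per mesh: even instances render the left eye, odd the
    // right, so both eyes come out of a single draw call.
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr, 2);
}
```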

* Your personal experience - easy, hard? Maybe easy to set up, but hard to make it work *well*?

Definitely not easy to make a good experience. It wasn't very hard just to get our engine up and running on VR (we had 1 programmer spend 2 weeks or so doing this), but it's taken a lot of effort to make our engine more efficient at rendering VR. On top of that there was tons of time spent designing and iterating to figure out what would make for a compelling VR experience.

By the way, Oculus has a "Best Practices" doc that you should read through. It should give you a better idea of what's involved in making a VR app, although some of it is a little bit outdated (for instance, they no longer have the option of letting the app do its own distortion; the compositor always does it for you).

@[member='MJP'], Does the nausea have more to do with the latency or framerate or both?

-potential energy is easily made kinetic-

@[member='MJP'], Does the nausea have more to do with the latency or framerate or both?

He mentioned using the sticks for looking/movement -- even with amazing framerate/latency, this feels really weird. If the VR camera rotates, but your head doesn't rotate, your brain gets upset. Your inner ear is telling your brain that you didn't move your head, but your eyes are telling it that you did move your head, so your brain decides that you've eaten poison and starts trying to get it out of your stomach...

FWIW though, everyone decided that using a stick to move was a really bad idea early on, hence all the VR games with teleportation now... but I've played a lot of Onward, which uses a movement stick (plus room scale movement) and I find it to be completely fine.

So personally, I don't mind having a thumbstick for movement (as long as the movement speed is quite slow), but using a thumbstick to look around is still a terrible idea.

The latter point can cause problems for the Oculus Rift, because out of the box it is a "front-facing VR" experience. It's not designed for the player to be facing backwards (without forking out extra $$$ for a 3rd tracking camera)... so a well-designed Oculus VR game should use gameplay that doesn't require the player to turn around... Some games work around this by having a "turn 180º" button, which fades to black, rotates you, and then fades back in. This is a little less disorienting than giving you a turn thumbstick.

A side note to add to the above -- the SteamVR/OpenVR SDK isn't technically tied to the Vive. SteamVR is meant to be an open software platform that works with all hardware - currently they support HTC Vive, Oculus Rift and OSVR.

Oculus also has their own SDK that you can use instead of OpenVR, which is tied to the Oculus hardware.

MJP covered most everything nicely, just thought I'd mention:

For distant objects the parallax/view difference can be pretty much nonexistent. If you've got the time, something nice can be to render one eye with full color/depth buffers etc., and for the other eye only render nearby objects (stuff close enough to have visible parallax). You can then reproject all the pixels beyond that distance from the first eye to the other and save the draw calls/models/etc.
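To give a feel for why that works, here's a back-of-the-envelope calculation of the pixel disparity between the two eyes as a function of distance; the IPD, FOV, and per-eye resolution are illustrative numbers rather than the specs of any particular headset:

```cpp
// Horizontal disparity (in pixels) between the two eyes for a point at
// distance d straight ahead: disparity = ipd * focalLengthPixels / d.
#include <cmath>
#include <cstdio>

float DisparityPixels(float distanceMeters)
{
    const float ipd        = 0.064f;                         // ~64 mm eye separation
    const float fovRadians = 100.0f * 3.14159265f / 180.0f;  // ~100 degree horizontal FOV
    const float eyeWidthPx = 1344.0f;                        // example per-eye render width
    const float focalPx    = (eyeWidthPx * 0.5f) / std::tan(fovRadians * 0.5f);
    return ipd * focalPx / distanceMeters;
}

int main()
{
    // Beyond a few tens of metres the offset drops below a pixel.
    std::printf("1 m: %.1f px   10 m: %.1f px   100 m: %.2f px\n",
                DisparityPixels(1.0f), DisparityPixels(10.0f), DisparityPixels(100.0f));
    return 0;
}
```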

Also positional audio is very nice, and so is a lot more audio detail to go with it. In VR it's definitely more noticeable when, say, you drop a physics object and it doesn't make a sound, than when it happens in a normal game.

I'll tell you right now that if you port a FPS to VR that uses analog sticks for movement and rotation, you're going to have some very nauseous players. There are some players that can handle more extreme situations without discomfort, but I personally am of the belief that as VR developers we have a responsibility to make comfort a top priority. It's already a small, niche market, and we're never going to expand past that if we make the average user want to throw up when they play our games.

FWIW though, everyone decided that using a stick to move was a really bad idea early on, hence all the VR games with teleportation now... but I've played a lot of Onward, which uses a movement stick (plus room scale movement) and I find it to be completely fine. So personally, I don't mind having a thumbstick for movement (as long as the movement speed is quite slow), but using a thumbstick to look around is still a terrible idea.


How is it on PC?

I don't have any VR experience, but I don't want to play a game with teleporting or worse if I don't have accurate control of the viewing direction.
The FPS way of moving and looking around is the most important progress in games we've ever had - VR is not worth giving it up for.
I'm not going to pay a lot of money just to play rail shooters or something, and I guess I'm not alone; the same is true for a lot of core gamers.
Not average gamers, but maybe those who would be more willing to invest in VR and accept initial motion sickness for better games.

Personally I think VR is going to fail because of this:
* Fancy controllers (too expensive, need exclusive games - just start with the headset and wait until there is a market for those things)
* Comfortable games (like going back to the age of interactive movies on CD-ROM? Where are the 'real' games?)
* Too-high resolutions (too expensive and limiting - try an optical low-pass filter to hide pixels)



So what I suggest for PC games is simply:

Don't change controls. Keep mouse look and just add the head rotation on top of that. Use a gamepad or keyboard for motion.
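To illustrate that suggestion, here's a tiny sketch where the final camera rotation is the mouse-driven body yaw composed with the tracked head rotation, so head tracking stays 1:1 while the mouse turns the whole tracking space; the little matrix helpers are illustrative, not from any particular SDK:

```cpp
#include <cmath>

struct Mat3 { float m[3][3]; };

// Yaw rotation about the vertical axis, driven by the mouse.
Mat3 RotationY(float radians)
{
    const float c = std::cos(radians), s = std::sin(radians);
    return { { {    c, 0.0f,    s },
               { 0.0f, 1.0f, 0.0f },
               {   -s, 0.0f,    c } } };
}

Mat3 Multiply(const Mat3& a, const Mat3& b)
{
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// Mouse yaw turns the whole tracking space; the tracked HMD rotation is applied
// on top, so head tracking always stays 1:1 with the player's real head.
Mat3 BuildCameraRotation(float mouseYawRadians, const Mat3& hmdRotation)
{
    return Multiply(RotationY(mouseYawRadians), hmdRotation);
}
```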

Don't create VR-only games; instead, make more games compatible with VR. Let the users decide if they like it, or get sick the first hour but feel better the next day.
(I played Trackmania every day for years, stopped for some months, came back, and there was extreme motion sickness just from the regular monitor for both me and my girl. After some days we got used to it again - no more sickness.)

Don't try to invent a new genre just for VR - that will happen automatically with time. Instead, again, make current genres work with VR.

No room tracking please - I don't want to stumble over my 3-year-old or fall out of the window. Just wanna sit lazily in my chair as always and see virtual things in 3D.



Maybe I'd change my mind if I had real VR experience (I did render two views side by side on screen and crossed my eyes - it works! :) ),
but I would like to hear what you guys personally think about it. (Keep in mind that if this works, it would have worked with far less investment and the $400 glasses everyone wanted.)


And a more technical question: did you try to calculate specular only once, from a point midway between the eyes? (thinking of object-space shading)

