3D TV rendering

Hello All,

I have been looking for a decent, simple tutorial that shows the steps required to render a native stereoscopic image, so that on a 3D TV or monitor you get depth in the image. If anyone knows of anything that may be of interest, please share it.


Do I only need to double the framerate and offset the left and right images correctly, or is there more to it?


My test tv will be http://www.panasonic.co.uk/html/en_GB/Products/VIERA+Flat+Screen+TV/2011+LED+%26+LCD+TV/TX-L37DT30B/Overview/7098157/index.html


Thanks in advance.

hinchy
"I have more fingers in more pies than a leper at a bakery!"
There is plenty more to it:
Your game needs to intelligently control the separation and convergence of your cameras.
All your rendering needs to be S3D compatible (volumetrics, clouds, etc.).
Your HUD needs to be S3D compatible.
If you do it "right", your offset view matrix needs to include a shear (an off-axis projection), which requires all your shader code to be written correctly (see the sketch after this list).
It is easy to get a quick S3D result, but hard to develop an actual game.
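To make the "shear" point concrete, here is a minimal C++ sketch of the parallel-axis asymmetric-frustum (off-axis) setup that most S3D renderers use. The names StereoFrustum and makeEyeFrustum are purely illustrative; the assumption is that you feed the returned bounds into a glFrustum-style projection and offset the camera along its right axis.

#include <cmath>

// eyeSign = -1 for the left eye, +1 for the right eye.
struct StereoFrustum {
    float left, right, bottom, top;  // near-plane bounds for a glFrustum-style projection
    float eyeOffsetX;                // shift the camera by this along its right (x) axis
};

StereoFrustum makeEyeFrustum(float fovY,        // vertical field of view, radians
                             float aspect, float zNear,
                             float separation,  // interaxial distance in world units
                             float convergence, // distance to the zero-parallax (screen) plane
                             int eyeSign)
{
    StereoFrustum f;
    f.top    =  zNear * std::tan(fovY * 0.5f);
    f.bottom = -f.top;

    float halfW = f.top * aspect;

    // This horizontal shift of the frustum is the "shear": both eyes keep
    // parallel view directions but share the same zero-parallax plane.
    float shift = (separation * 0.5f) * zNear / convergence;

    f.left  = -halfW - eyeSign * shift;
    f.right =  halfW - eyeSign * shift;

    f.eyeOffsetX = eyeSign * separation * 0.5f;
    return f;
}

Usage would be building two projections per frame (eyeSign -1 and +1), rendering the scene once per eye, and never "toeing in" the cameras by rotating them towards each other, which introduces vertical parallax.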

Also, on the output side of things, there are many different ways to transmit S3D video to a TV/monitor,
including different variations of frame packing, etc.
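As a concrete example of one such output format, here is a rough OpenGL-flavoured sketch of side-by-side-half packing, where each eye gets half of the output frame's width and the TV stretches the halves back out. renderScene and the view/projection pointers are hypothetical placeholders for whatever your engine provides; frame packing proper sends both eyes at full resolution in one taller frame but follows the same pattern.

#include <GL/gl.h>

// Hypothetical engine hook; substitute your own scene/camera code.
void renderScene(const float* viewMatrix, const float* projMatrix);

void renderSideBySideHalf(int backbufferWidth, int backbufferHeight,
                          const float* leftView,  const float* leftProj,
                          const float* rightView, const float* rightProj)
{
    // Left eye into the left half of the backbuffer.
    glViewport(0, 0, backbufferWidth / 2, backbufferHeight);
    renderScene(leftView, leftProj);

    // Right eye into the right half; a TV set to "side by side" mode
    // stretches each half back to full width.
    glViewport(backbufferWidth / 2, 0, backbufferWidth / 2, backbufferHeight);
    renderScene(rightView, rightProj);
}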
Thanks for the reply, skytiger. Interesting; it sounds like there is quite a bit to it. Can you elaborate a bit more on why a volumetric or similar rendering system is required? I would have thought a normal polygon renderer would provide depth information just as well. As far as the output format goes, I will gear it for my TV and not worry about anything else while I am learning and understanding all the nuances of creating a 3D rendered image. Any further links anyone can provide will be most welcome.
"I have more fingers in more pies than a leper at a bakery!"
I converted a game to S3D in under an hour,
but then realised that about 50% of all my graphics weren't working properly.
Anything 2D (for instance off-screen particles, clouds, explosions) had to be rebuilt from scratch.
If you watch a 3D film you can see how they always end a scene with the correct separation and convergence to begin the next ...
in fact the whole movie is carefully designed to ensure smooth, continuous changes to convergence and separation (otherwise the audience will throw up or get headaches).
In a video game this means finding ways to smoothly transition the cameras during gameplay, at the start and end of (stereo) cutscenes, when menus appear, etc.
The viewer can "drop out" of 3D for a few seconds if you surprise their eyes ...
Also careful and clever use of positive and negative parallax can give great results
but too much negative parallax makes many people sick, so in films they smoothly draw your eyes into negative parallax
and then a MONSTER will jump out, and then the cameras return to positive parallax.
A good way to experience a naive approach to S3D is to play a PC game with an S3D driver (like iZ3D or TriDef) and see for yourself.
GTASA is impressive in stereo, but you need to constantly tweak convergence and separation (with keyboard shortcuts).
A good game will do this automatically ... the question is how?
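To sketch one possible answer (my own illustration, not something from any particular shipped game): track a depth of interest each frame, for example the scene depth under the crosshair or the nearest important object, and ease the convergence distance towards it instead of snapping, so the viewer's eyes are never surprised.

#include <algorithm>
#include <cmath>

struct AutoConvergence {
    float convergence = 5.0f;   // current zero-parallax distance, world units

    // nearestDepthOfInterest: e.g. the scene depth under the crosshair this frame.
    // dt: frame time in seconds. easeRate: higher tracks faster but more abruptly.
    void update(float nearestDepthOfInterest, float dt, float easeRate = 2.0f)
    {
        // Clamp so a wall right in front of the camera cannot drag
        // convergence (and with it the whole depth budget) to zero.
        float target = std::max(nearestDepthOfInterest, 1.0f);

        // Exponential ease: approach the target smoothly, never jump.
        float t = 1.0f - std::exp(-easeRate * dt);
        convergence += (target - convergence) * t;
    }
};

Separation can be driven the same way, and games typically also expose a user "depth" slider that scales it, since comfortable values vary from person to person.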
Let's be realistic here: you do not have to go around converting all of your particles to proper volumetrics, nor is that actually a realistic thing to do. All of the 3D games on the market that I have played (including one that I shipped) have tons of billboarded particles and other 2D effects. Sure it looks crappy in 3D and in a perfect world you wouldn't do it, but it's hardly going to be the only shortcut that your game takes in terms of graphics. Especially when you have to render each frame twice.

Camera divergence, however, can definitely be a problem if you don't want people to puke or go cross-eyed while playing your game. My colleague spent considerable time working on this.
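One back-of-the-envelope way to keep divergence in check (an illustrative sketch, not anyone's shipped solution): with the off-axis setup, on-screen parallax approaches the eye separation as objects recede to infinity, so if the separation expressed as a fraction of image width never exceeds the viewer's interocular distance as a fraction of the physical screen width, the eyes never have to diverge.

#include <cmath>

// Largest safe eye separation, as a fraction of image width: worst-case
// on-screen parallax then never exceeds the viewer's interocular distance.
// 65 mm is a common assumption for adult interocular distance; the physical
// screen width has to come from the user or a setup screen.
float maxSeparationFractionOfWidth(float screenWidthMm, float interocularMm = 65.0f)
{
    return interocularMm / screenWidthMm;
}

// Convert that fraction to world units for the off-axis camera setup:
// the virtual "screen" at the convergence distance is 2 * c * tan(fovY/2) * aspect wide.
float maxSeparationWorld(float fraction, float convergence, float fovY, float aspect)
{
    float screenWidthWorld = 2.0f * convergence * std::tan(fovY * 0.5f) * aspect;
    return fraction * screenWidthWorld;
}

Note this is only the divergence limit; comfortable values are usually much smaller (a few percent of image width), and negative parallax needs its own, tighter budget.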

