JDX_John

3D


JDX_John    292
Do modern graphics cards have support for developing games that use stereoscopic 3D (so confusing having "3D" carry two meanings)? And if so, is that done automatically by the card, or do you have to use additional APIs? I would have thought that since you have a 3D scene to start with, the GPU should be able to output stereo 3D automatically, but what do I know?

I'm thinking consumer/specialist hardware, i.e. a regular PC but with high-end GPU and special screens... something you'd use for scientific visualization rather than mass distribution to Joe Public.

SimonForsman    7642
[url="http://www.nvidia.com/object/3dtv-play.html"]http://www.nvidia.com/object/3d-vision-home-users.html[/url]

It's fairly automatic, but there are things you can do as a game developer to make it work better or worse. (Mixing in 2D objects requires some extra attention, for example; see the sketch below.)

http://developer.download.nvidia.com/whitepapers/2010/3DV_BestPracticesGuide.pdf

is a good resource.
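
On the 2D point: a plain screen-space HUD element sits at zero parallax, so it can visually clash with 3D geometry that pops out in front of it. One way to handle it is to give the element a small per-eye horizontal offset based on the depth you want it to appear at, roughly like this (a purely illustrative sketch; the names and the exact formula are placeholders, see the guide above for the real recommendations):

[code]
// Purely illustrative: give a 2D HUD element a per-eye horizontal offset so it
// appears at a chosen depth instead of at zero parallax, where it can clash
// with 3D geometry that pops out of the screen in front of it.
// All names and the exact formula here are placeholders, not from the guide.

struct StereoParams {
    float separation;   // eye separation, in normalized screen units
    float convergence;  // depth at which parallax is zero (the screen plane)
};

// eyeSign: -1 for the left eye, +1 for the right eye.
// hudDepth: the view-space depth the HUD element should appear at.
float HudParallaxOffset(const StereoParams& p, float eyeSign, float hudDepth)
{
    // Elements at the convergence depth get no offset; elements further away
    // are shifted apart between the two eyes, which reads as depth.
    float parallax = p.separation * (1.0f - p.convergence / hudDepth);
    return eyeSign * 0.5f * parallax;
}

// When drawing the HUD quad for each eye:
//   float x = baseX + HudParallaxOffset(params, eyeSign, desiredHudDepth);
[/code]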

XXChester    1364
Graphics cards support real 3D and have for quite some time. But I believe they all require 3D glasses to be worn. WoW has real 3D effects apparently (I have never tried it but I remember reading about it a few years ago).

Dunge    405
[quote name='XXChester' timestamp='1307540395' post='4820928']
Graphics cards support real 3D and have for quite some time. But I believe they all require 3D glasses to be worn. WoW has real 3D effects apparently (I have never tried it but I remember reading about it a few years ago).
[/quote]

If you have an NVIDIA card and you go into the NVIDIA control panel, there's a stereoscopy menu option where you can activate 3D for any game. You can choose which type you want (red/green glasses or Real3D glasses). I remember playing the first Assassin's Creed with those red/green glasses and it worked quite nicely.

Hodgman    51334
[quote name='JDX_John' timestamp='1307580084' post='4821152']No cards support simply rendering to, say, a 3D TV; they all have their own bespoke setups?[/quote]Did you not read the replies??
First link:[quote]Turn [b]your 3D TV[/b] into the ultimate, high-definition, 3D entertainment experience by [b]connect[/b]ing [b]it to your[/b] NVIDIA® GeForce® GPU-powered [b]PC[/b] or notebook.[/quote]

Ashaman73    13715
[quote name='JDX_John' timestamp='1307537449' post='4820914']
Do modern graphics cards have support for developing games that use stereoscopic 3D (so confusing having "3D" carry two meanings)? And if so, is that done automatically by the card, or do you have to use additional APIs? I would have thought that since you have a 3D scene to start with, the GPU should be able to output stereo 3D automatically, but what do I know?

I'm thinking consumer/specialist hardware, i.e. a regular PC but with high-end GPU and special screens... something you'd use for scientific visualization rather than mass distribution to Joe Public.
[/quote]
Stereo 3D on the PC is quite old (remember those old ELSA shutter glasses?). Back then doing 3D was quite simple: all the common 3D APIs (DX/OGL) used some kind of ModelViewProjection matrix. The trick was that the driver shifted the ModelViewProjection matrix slightly to the left/right in synchronisation with the shutter glasses and the video signal. It worked out of the box for most games.
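
The per-eye shift boils down to something like the following (a rough, purely illustrative sketch, not actual driver code): the frame is rendered twice, and each transformed vertex gets a small horizontal offset that depends on its depth, so geometry at the convergence distance sits at screen depth and more distant geometry gets pushed apart between the eyes.

[code]
// Rough sketch of the per-eye shift, in the spirit of what the driver did
// behind the application's back. Types and names are illustrative only.

struct Vec4 { float x, y, z, w; };

// eyeSign: -1 for the left eye, +1 for the right eye.
// separation/convergence correspond to the driver's stereo settings.
Vec4 ApplyStereoShift(Vec4 clipPos, float eyeSign,
                      float separation, float convergence)
{
    // Vertices at the convergence depth get zero parallax (screen depth);
    // vertices further away are shifted apart between the two eyes.
    clipPos.x += eyeSign * separation * (clipPos.w - convergence);
    return clipPos;
}

// Conceptually the driver renders the frame twice:
//   left  eye: every output position run through ApplyStereoShift(pos, -1, s, c)
//   right eye: every output position run through ApplyStereoShift(pos, +1, s, c)
// in sync with the shutter glasses.
[/code]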

Nowadays the APIs are far more flexible, and almost every game uses shaders, which leaves the responsibility for how vertices are transformed into which rendering space completely to the developer. This makes it incredibly difficult for a driver to automatically modify the view space unless you follow the guidelines of the video chip manufacturers.

To be honest, as long as the major GPU manufacturers (at least NVIDIA/AMD, maybe Intel) do not agree on a consistent guideline or API extension, 3D for PC games will fail again. Maybe the next generation of consoles, or engines like UDK/Unity, will provide out-of-the-box 3D support, but only time will tell.

JDX_John    292
[quote name='Hodgman' timestamp='1307582631' post='4821155']
[quote name='JDX_John' timestamp='1307580084' post='4821152']No cards support simply rendering to, say, a 3D TV; they all have their own bespoke setups?[/quote]Did you not read the replies??[/quote]I read the replies, but I hadn't read those links yet:

[quote name='Dunge' timestamp='1307541021' post='4820929']
[quote name='XXChester' timestamp='1307540395' post='4820928']
Graphics cards support real 3D and have for quite some time. But I believe they all require 3D glasses to be worn. WoW has real 3D effects apparently (I have never tried it but I remember reading about it a few years ago).
[/quote]

If you have an NVIDIA card and you go into the NVIDIA control panel, there's a stereoscopy menu option where you can activate 3D for any game. You can choose which type you want (red/green glasses or Real3D glasses). I remember playing the first Assassin's Creed with those red/green glasses and it worked quite nicely.
[/quote]

Apologies, it was late when I was reading the replies.



[quote name='Ashaman73' timestamp='1307597932' post='4821209']
Nowadays the APIs are far more flexible, and almost every game uses shaders, which leaves the responsibility for how vertices are transformed into which rendering space completely to the developer. This makes it incredibly difficult for a driver to automatically modify the view space unless you follow the guidelines of the video chip manufacturers.[/quote]I'm not sure I follow. Regardless of fixed-function or shaders, you end up rendering pixels from a 3D scene. So why shouldn't the GPU be able to position two virtual cameras, like how 3D filming is done? I don't see why that couldn't be automated; unlike real life, you actually have [i]more[/i] 3D information to start with.

Ashaman73    13715
[quote name='JDX_John' timestamp='1307606718' post='4821239']
I'm not sure I follow. Regardless of fixed-function or shaders, you end up rendering pixels from a 3D scene. So why shouldn't the GPU be able to position two virtual cameras, like how 3D filming is done? I don't see why that couldn't be automated; unlike real life, you actually have [i]more[/i] 3D information to start with.
[/quote]
Because the developer decides how to transform a vertex to a pixel on screen any way he wants; there's no longer a "fixed" pipeline.

Take a deferred renderer as an example. You could define some fancy matrices, including some inverse transformations etc., and transfer them to the shaders via uniforms. How should the GPU know which matrix does what? In the end you have a transformed vertex in some space: it could be camera space, it could be world space, it could be screen space. Even if the GPU knew which transformation to change, what about light positions that were already transformed into camera space on the CPU and are used in a post-processing step? Those light positions would need to be changed too. On the other hand, a full-screen quad for a post process should not be modified.
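
To illustrate (hypothetical program and uniform names, assuming an OpenGL-style renderer): from the driver's point of view every one of these uploads is just a location plus raw floats. Nothing marks which ones are camera transforms that would need a per-eye offset, which are light positions already in camera space, and which must be left untouched.

[code]
// Hypothetical uniform uploads from a deferred renderer. To the driver each
// call is just a location plus raw float data; there is no reliable way to
// tell a camera matrix from a light position from a full-screen-quad transform.
// Program handles and float arrays are assumed to be set up elsewhere.

#include <GL/glew.h>

void UploadFrameUniforms(GLuint gbufferProg, GLuint lightProg, GLuint postProg,
                         const float* worldViewProj, const float* invProjection,
                         const float* lightPosViewSpace, const float* identity)
{
    glUseProgram(gbufferProg);
    glUniformMatrix4fv(glGetUniformLocation(gbufferProg, "u_worldViewProj"),
                       1, GL_FALSE, worldViewProj);     // would need a per-eye offset
    glUniformMatrix4fv(glGetUniformLocation(gbufferProg, "u_invProjection"),
                       1, GL_FALSE, invProjection);     // used later to reconstruct positions

    glUseProgram(lightProg);
    glUniform3fv(glGetUniformLocation(lightProg, "u_lightPosView"),
                 1, lightPosViewSpace);                 // already in camera space on the CPU

    glUseProgram(postProg);
    glUniformMatrix4fv(glGetUniformLocation(postProg, "u_quadTransform"),
                       1, GL_FALSE, identity);          // must NOT be stereo-adjusted
}
[/code]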

Even the z-buffer is not reliable. Think about transparent particles which are rendered without writing depth to the z-buffer, or a deferred shader which stores the depth value as a color in a secondary render target.

