Shaders for other parts of the graphics pipeline...

Recommended Posts

Does anyone know if there are current or future plans from the hardware manufacturers to allow shaders for other parts of the graphics pipeline? Better yet, does anyone have any original ideas as to what creative uses you could have for different types of shaders, or additional features in vertex/pixel shaders?

Personally, I thought that a tessellation shader would be interesting. It could be useful to have full control over how primitives are built and spliced together, maybe opening the way for new types of primitives, or for storing information in the resulting fragment that the pixel shader could use, such as how the fragment was generated or what type of primitive it came from.

I also thought that the rasterizer would be an interesting place to add some programmability - perhaps to create patterns that would cull certain pixels of a fragment and send the others to the pixel shading unit (like a checkerboard pattern). Or better yet, use a pattern mask to split the pixels up, where some of them would go to one pixel shader and the others to a secondary pixel shader - perhaps with applications in image processing or post-processing.

So what do you guys think? Let's hear some of those creative ideas or random thoughts that popped into your head during one of those late-night programming sessions. [grin][grin]
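The checkerboard routing idea is easy to mock up on the CPU. A minimal Python sketch - purely hypothetical, with `shader_a`/`shader_b` and the parity mask invented for illustration - of sending alternating pixels to two different shading functions:

```python
# Hypothetical CPU emulation of a programmable rasterizer mask:
# even-parity pixels go to shader_a, odd-parity pixels to shader_b.

def shader_a(x, y):
    return (255, 0, 0)   # e.g. the full-quality shader

def shader_b(x, y):
    return (0, 0, 255)   # e.g. a cheaper approximation

def rasterize(width, height):
    framebuffer = {}
    for y in range(height):
        for x in range(width):
            # The "pattern mask": route each pixel by its parity.
            shade = shader_a if (x + y) % 2 == 0 else shader_b
            framebuffer[(x, y)] = shade(x, y)
    return framebuffer

fb = rasterize(4, 4)
```

A real rasterizer mask would presumably be far more general than a fixed parity test, but the routing logic is the same.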

WGF / DirectX 10 will likely support 'geometry shaders' as they're sometimes called - that's the programmable tessellation hardware you're talking about.

Programmable blending hardware is often requested (giving you full control over how a given fragment is blended with the pixel currently in the frame buffer, also potentially allowing you to do conditionals based on z / stencil values or other arbitrary values in your own buffers). That will probably happen at some point, but not in the near future, because it requires read-modify-write access to the frame buffer memory, which is difficult to do efficiently with full programmability.

Other possibilities are programmable filtering / sampling for textures and programmable anti-aliasing schemes.
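To make the programmable-blending idea concrete, here is a hypothetical CPU sketch of a user-defined blend stage - `custom_blend` and its signature are invented for illustration - that reads the destination color and depth and applies a conditional:

```python
# Hypothetical programmable blend stage: the blend function reads the
# existing framebuffer pixel (and its depth) and returns the new pixel.

def custom_blend(src_rgb, dst_rgb, src_z, dst_z):
    # Conditional on depth: only blend 50/50 if the incoming fragment
    # is closer; otherwise keep the destination pixel untouched.
    if src_z < dst_z:
        blended = tuple(0.5 * s + 0.5 * d for s, d in zip(src_rgb, dst_rgb))
        return blended, src_z
    return dst_rgb, dst_z

color, z = custom_blend((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.2, 0.6)
```

In hardware this is exactly the read-modify-write access to framebuffer memory that makes the feature hard to implement efficiently.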

Next generation graphics hardware (nVidia's NV50 and ATI's R520) will hopefully support the unified shader concept that we've all been desiring for quite some time. Essentially, this means that there will be one generic shader language that is more robust and capable of handling many different situations. GPU programming will soon be commonly used for processing of arbitrary data.

Quote:
Original post by Sages
Next generation graphics hardware (nVidia's NV50 and ATI's R520) will hopefully support the unified shader concept that we've all been desiring for quite some time. Essentially, this means that there will be one generic shader language that is more robust and capable of handling many different situations. GPU programming will soon be commonly used for processing of arbitrary data.


I have seen many examples of stream processing using GPUs, but what other types of situations are you referring to? I thought that the whole point of the GPU was to do a few things very fast whereas a CPU does a lot of different things fast, just not as fast. If the GPU is able to process arbitrary data, wouldn't it kind of be like having dual CPUs?

Quote:
Original post by Jason Z
I have seen many examples of stream processing using GPUs, but what other types of situations are you referring to? I thought that the whole point of the GPU was to do a few things very fast whereas a CPU does a lot of different things fast, just not as fast. If the GPU is able to process arbitrary data, wouldn't it kind of be like having dual CPUs?

GPUs are suited to certain kinds of data processing tasks, so it's probably not accurate to say they will soon be processing 'arbitrary' data, but I think they will find increasing numbers of uses outside of straightforward polygon rendering. As GPUs become more flexible and more general-purpose, it will become easier to use them for a wider variety of problems, but they will always excel at problems that require the same bit of code to be run on many data elements, with little branching and few data dependencies across elements. They will never be very good at code that does lots of integer arithmetic and conditionals, or code where there are many inter-dependencies in the data.
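As an illustration of the kind of workload that maps well to a GPU, here is a SAXPY-style kernel in Python - one small piece of code applied independently to every element, with no branching and no cross-element dependencies (the function name is just for this sketch):

```python
# The kind of workload GPUs excel at: one tiny kernel applied
# independently to every element (here, a SAXPY: y = a*x + y).

def saxpy(a, xs, ys):
    # No branching, no cross-element dependencies: each output element
    # depends only on the inputs at the same index, so every element
    # could be computed in parallel.
    return [a * x + y for x, y in zip(xs, ys)]

result = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
```

Code full of conditionals or with chains of dependencies between elements would break exactly this element-wise independence.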

Quote:
Original post by Sages
Next generation graphics hardware (nVidia's NV50 and ATI's R520) will hopefully support the unified shader concept that we've all been desiring for quite some time. Essentially, this means that there will be one generic shader language that is more robust and capable of handling many different situations. GPU programming will soon be commonly used for processing of arbitrary data.


I guess this confuses me, since there is already a generic language (e.g. HLSL). A unified shader core is a hardware implementation detail, and won't have much impact on shader authors. Today, ps_3_0 and vs_3_0 are virtually identical. Unless you mean no pixel shaders and vertex shaders, just a 'shader' in the RenderMan sense? (This won't happen any time soon...)



Guest Anonymous Poster
Quote:
Programmer :: Quake 4 :: Raven Software


Shouldn't that be RavenSoftware::Quake4::Programmer ? :)

Quote:
A unified shader core is a hardware implementation detail, and won't have much impact on shader authors.

From what I've read, it's more than just a unified syntax - it's about unified data storage and access. The DirectX Next Early Preview is an interesting read on this subject.

One example was that you could read/modify/write vertex buffer data from your "generic" shader. It could be quite a powerful way of multi-passing stuff, storing intermediate results back into a texture/vertex buffer. Also, things like skinning for shadow-volume based rendering could be done once and stored for all subsequent passes.
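The "skin once, reuse for all passes" idea can be sketched like this (hypothetical names throughout; `skin` is only a stand-in for real matrix-palette skinning):

```python
# Hypothetical sketch of "skin once, reuse for every pass": run the
# skinning a single time, write the results back to a buffer, and let
# the shadow-volume and color passes both read the pre-skinned data.

def skin(vertices, offset):
    # Stand-in for real matrix-palette skinning: just shift vertices.
    return [(x + offset, y + offset, z + offset) for (x, y, z) in vertices]

mesh = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
skinned_buffer = skin(mesh, 0.5)   # done once per frame

def render_pass(name, buffer):
    # Every pass consumes the same pre-skinned buffer; no re-skinning.
    return (name, len(buffer))

passes = [render_pass(p, skinned_buffer) for p in ("shadow", "color")]
```

The point is that the (expensive) skinning runs once, while the per-pass work only reads the stored result.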

I might be a bit off - I sometimes give up reading the PR that Nvidia/ATI spit out; a lot of it is plain rubbish tied up in over-complicated explanations [smile].

Also, as for a "hardware implementation detail", have a look at Beyond3D.com: Differing Philosophies Emerge Between ATI and NVIDIA. It seems that a "Jack of All Trades, Master of None" scenario could be emerging if Nvidia is right.

Quote:
just a 'shader' in the RenderMan sense? (This won't happen any time soon...)

Agreed. I doubt they do it often, but I read somewhere that it's possible to use a RenderMan shader to read/write data over a standard network connection, and even to use the random static from an unconnected line-in port on a sound card as the source of random numbers for a procedural texture...

Quote:
WGF / DirectX 10 will likely support 'geometry shaders' as they're sometimes called - that's the programmable tessellation hardware you're talking about.

I'd bet that the better graphics programmers could do some seriously funky graphics if they could get hold of the tessellation/interpolation geometry code [smile]

Jack

Quote:
Original post by jollyjeffers
... blah ...


Jack hit it on the head, so I don't believe I need to respond to this one.

Quote:
Original post by jollyjeffers
I'd bet that the better graphics programmers could do some seriously funky graphics if they could get hold of the tessellation/interpolation geometry code.

We have tessellation code now; the issue is that it isn't supported in hardware. With hardware tessellators you'd be able to take a 1,200-polygon character model and render it with a displacement map. The hardware would tessellate the model and mold it based on the displacement map to create a perfectly smooth mesh. However, since most games aren't geometry-limited these days, I don't see consumer-level hardware tessellators in the near future. *sigh*
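A rough CPU sketch of what such a tessellator might do - subdivide each triangle at its edge midpoints and displace the vertices along a normal by a displacement-map sample (the names and the uniform 1-to-4 subdivision scheme are assumptions for illustration):

```python
# Rough sketch of displacement-mapped tessellation: split a triangle
# at its edge midpoints (1 -> 4 triangles), then push each vertex
# along the surface normal by a displacement-map sample.

def midpoint(a, b):
    return tuple((p + q) / 2.0 for p, q in zip(a, b))

def subdivide(tri):
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    # One interior triangle plus three corner triangles.
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def displace(v, normal, height, scale=1.0):
    # 'height' stands in for a sample from the displacement map.
    return tuple(p + n * height * scale for p, n in zip(v, normal))

tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
tris = subdivide(tri)                           # 4 smaller triangles
bumped = displace(tris[0][0], (0.0, 0.0, 1.0), 0.25)
```

Repeating `subdivide` recursively is what would turn a 1,200-polygon cage into a dense, smooth mesh.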

Quote:
Original post by jollyjeffers

From what I've read, it's more than just a unified syntax - it's about unified data storage and access. The DirectX Next Early Preview is an interesting read on this subject.

One example was that you could read/modify/write vertex buffer data from your "generic" shader. It could be quite a powerful way of multi-passing stuff, storing intermediate results back into a texture/vertex buffer. Also, things like skinning for shadow-volume based rendering could be done once and stored for all subsequent passes.

I might be a bit off - I sometimes give up reading the PR that Nvidia/ATI spit out; a lot of it is plain rubbish tied up in over-complicated explanations [smile].



You can do much of this today - it's just awkward, since textures can be piped back through via texture reads in vs_3_0. There is much to be gained by stating that all formats must be supported by both vs and ps, and also by facilitating stream output - but neither of these implies a unified shader core. It is unclear whether that is a desirable thing to do or not.

Basically, if by "unified" you mean that both pixel and vertex shaders will have access to the same resources, then yes, this will happen. That is, textures, vertex buffers, constant buffers, etc. will all be accessible anywhere. But it isn't as huge a leap as it might sound at first, since there is already some resource sharing between vs_3_0 and ps_3_0.

FYI: The Geometry Shader is not a tessellator, using it in such a way would be... abusive.




Being able to customize the depth buffer calculations would be one of the most important features, in my opinion.

Imagine if you could do the depth buffer calculations for several lights at once and store them in several buffers/texture units. That way I could create point-light shadow maps in all 6 directions on the fly, without passing the same geometry multiple times.
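The routing such a single pass would need can be sketched on the CPU - purely hypothetical, just the "which of the 6 cube-map faces does this direction belong to" selection:

```python
# Sketch of the single-pass cube shadow map idea: for each direction
# from the point light, pick which of the 6 cube-map depth buffers it
# belongs to - the face of the dominant axis.

def cube_face(d):
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

face = cube_face((0.1, -0.9, 0.2))   # dominant axis is -y
```

With hardware support, geometry submitted once could have its depth written into whichever of the six buffers this selection picks.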

These tessellation shaders sound pretty expensive to me, though. Imagine storing a triangle in graphics memory - if you want to tessellate it, how would you do it? Splitting it into several smaller triangles seems nearly impossible to do.

The only thing I can think of is applying a depth map onto the surface. If you look at it from the front it won't matter, but if you look at it from the side you'd have to do a per-pixel depth test, taking the triangle plane position (± the depth map * scale) * normal to find out which depth buffer value to test against. On top of that, you'd need to recalculate normals to send the proper per-pixel normals to the pixel shader, plus other things that would need to be solved.

It would be a nice feature, but do we really need it already? Shouldn't we unify model formats and the output of editing tools first, before we go on pushing further towards realism? Didn't Microsoft and a few companies plan to release a standard modelling format last year?

Quote:
Original post by Sages
We have tessellation code now; the issue is that it isn't supported in hardware. With hardware tessellators you'd be able to take a 1,200-polygon character model and render it with a displacement map. The hardware would tessellate the model and mold it based on the displacement map to create a perfectly smooth mesh. However, since most games aren't geometry-limited these days, I don't see consumer-level hardware tessellators in the near future. *sigh*


Is this an offline rendering technique now? So the model would essentially be tessellated to the point that there is one vertex for every pixel on the display? I suppose it would more likely be an estimated tessellation level, since the screen-space position isn't known before the vertex shader. Sounds interesting. Does anyone have any references for information on this topic?
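An "estimated tessellation level" could be as simple as picking a subdivision count from an edge's approximate projected size - a hypothetical sketch (the projection here is deliberately crude, and all names are made up):

```python
import math

# Hypothetical "estimated tessellation level": choose a subdivision
# count from how long an edge roughly appears on screen, aiming for
# about one vertex per target_px pixels.

def tess_level(edge_len_world, distance, fov_scale, screen_height, target_px=1.0):
    # Crude estimate of the edge's projected length in pixels.
    projected_px = edge_len_world / distance * fov_scale * screen_height
    return max(1, math.ceil(projected_px / target_px))

level = tess_level(1.0, 10.0, 1.0, 720)
```

Since the estimate only needs the pre-transform distance, it could run before any per-vertex work, which is exactly why it would be an estimate rather than an exact screen-space measure.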

Quote:
Original post by EvilDecl81
FYI: The Geometry Shader is not a tessellator, using it in such a way would be... abusive.

I was under the impression that 'geometry shader' was what they were calling the programmable tessellation stage. Maybe I'm wrong about that - what do you think 'geometry shader' refers to?

