Shaders for other parts of the graphics pipeline...

Started by
11 comments, last by mattnewport 19 years, 3 months ago
Does anyone know if there are current or future plans from the hardware manufacturers to allow shaders for other parts of the graphics pipeline? Better yet, does anyone have any original ideas as to what creative uses you could have for different types of shaders, or additional features in vertex/pixel shaders?

Personally, I thought that a tessellation shader would be interesting. It could be useful to have full control over how primitives are built and spliced together, maybe opening up the way for new types of primitives, or storing information in the resulting fragment that could be used by the pixel shader, such as how the fragment was generated or what type of primitive it came from.

I also thought that the rasterizer would be an interesting place to add some programmability. Perhaps to be able to create patterns that would cull certain pixels of a fragment and send the others to the pixel shading unit (like a checker-board pattern). Or better yet, use a pattern mask to split the pixels up, where some of them would go to one pixel shader and the others would go to a secondary pixel shader - perhaps with applications in image processing or post-processing.

So what do you guys think? Let's hear some of those creative ideas or random thoughts that popped into your head during one of those late-night programming sessions. [grin][grin]
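The checker-board split described above can be sketched on the CPU like this - a hypothetical illustration only, with made-up function names standing in for two pixel shaders:

```python
def shade_a(x, y):
    return (255, 0, 0)   # stand-in for the primary pixel shader

def shade_b(x, y):
    return (0, 0, 255)   # stand-in for the secondary pixel shader

def rasterize_with_mask(width, height):
    """Route each pixel to one of two 'pixel shaders' via a pattern mask."""
    image = {}
    for y in range(height):
        for x in range(width):
            # checker-board mask: even parity -> shader A, odd parity -> shader B
            if (x + y) % 2 == 0:
                image[(x, y)] = shade_a(x, y)
            else:
                image[(x, y)] = shade_b(x, y)
    return image
```

The mask here is hard-coded, but the point of the idea is that the pattern itself would be programmable.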
WGF / DirectX 10 will likely support 'geometry shaders' as they're sometimes called - that's the programmable tessellation hardware you're talking about. Programmable blending hardware is often requested (giving you full control over how a given fragment is blended with the pixel currently in the frame buffer, also potentially allowing you to do conditionals based on z / stencil values or other arbitrary values in your own buffers). That will probably happen at some point but not in the near future because it requires read-modify-write access to the frame buffer memory which is difficult to do efficiently with full programmability. Other possibilities are programmable filtering / sampling for textures and programmable anti-aliasing schemes.
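To make the programmable-blending request concrete, here is a rough CPU-side sketch of what such a shader could express - the blend rule and the stencil conditional are arbitrary examples, not any real API:

```python
def programmable_blend(src, dst, src_alpha, stencil):
    """Custom blend of a fragment with the pixel already in the frame buffer."""
    # conditional on an arbitrary per-pixel value (here, a stencil read)
    if stencil == 0:
        return dst                       # masked out: keep the frame buffer pixel
    # an arbitrary blend equation: lerp src over dst by source alpha
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src, dst))
```

The hard part the post mentions is exactly the `dst` read: full programmability here means read-modify-write on frame buffer memory for every fragment.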

Game Programming Blog: www.mattnewport.com/blog

Next generation graphics hardware (nVidia's NV50 and ATI's R520) will hopefully support the unified shader concept that we've all been desiring for quite some time. Essentially, this means that there will be one generic shader language that is more robust and capable of handling many different situations. GPU programming will soon be commonly used for processing of arbitrary data.
Quote:Original post by Sages
Next generation graphics hardware (nVidia's NV50 and ATI's R520) will hopefully support the unified shader concept that we've all been desiring for quite some time. Essentially, this means that there will be one generic shader language that is more robust and capable of handling many different situations. GPU programming will soon be commonly used for processing of arbitrary data.


I have seen many examples of stream processing using GPUs, but what other types of situations are you referring to? I thought that the whole point of the GPU was to do a few things very fast whereas a CPU does a lot of different things fast, just not as fast. If the GPU is able to process arbitrary data, wouldn't it kind of be like having dual CPUs?
Quote:Original post by Jason Z
I have seen many examples of stream processing using GPUs, but what other types of situations are you referring to? I thought that the whole point of the GPU was to do a few things very fast whereas a CPU does a lot of different things fast, just not as fast. If the GPU is able to process arbitrary data, wouldn't it kind of be like having dual CPUs?

GPUs are suited to certain kinds of data processing tasks, so it's probably not accurate to say they will soon be processing 'arbitrary' data, but I think they will find increasing numbers of uses outside of straightforward polygon rendering. As GPUs become more flexible and more general purpose it will become easier to use them for a wider variety of problems, but they will always excel at problems that require the same bit of code to be run on many data elements, with little branching or data dependencies across elements. They will never be very good at code that does lots of integer arithmetic and conditionals, or code where there are many inter-dependencies in the data.
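The "same bit of code over many elements" pattern described above looks like this when sketched on the CPU - one kernel applied independently to every element, so each evaluation could in principle run in parallel (the kernel itself is an arbitrary example):

```python
def kernel(v):
    # a per-element operation, e.g. a scale-and-bias as a shader might do
    return v * 2.0 + 1.0

def run_over_stream(data):
    """Apply one kernel to every element; no cross-element dependencies."""
    return [kernel(v) for v in data]
```

Code with heavy branching or where element N depends on element N-1 breaks this model, which is why such workloads stay on the CPU.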

Game Programming Blog: www.mattnewport.com/blog

Quote:Original post by Sages
Next generation graphics hardware (nVidia's NV50 and ATI's R520) will hopefully support the unified shader concept that we've all been desiring for quite some time. Essentially, this means that there will be one generic shader language that is more robust and capable of handling many different situations. GPU programming will soon be commonly used for processing of arbitrary data.


I guess this confuses me since there is already a generic language (e.g. HLSL). A unified shader core is a hardware implementation detail, and won't have much impact on shader authors. Today, ps_3_0 and vs_3_0 are virtually identical. Unless you mean no pixel shaders and vertex shaders, just a 'shader' in the RenderMan sense? (This won't happen any time soon...)



EvilDecl81
Quote:Programmer :: Quake 4 :: Raven Software


Shouldn't that be RavenSoftware::Quake4::Programmer ? :)
Quote:A unified shader core is a hardware implementation detail, and won't have much impact on shader authors.

From what I've read, it's more than just a unified syntax - it's about unified data storage and access. DirectX Next Early Preview is an interesting read on this subject.

One example was that you could read/modify/write vertex buffer data from your "generic" shader.. Could be quite powerful way of multi-passing stuff by storing intermediary results back into a texture/vertex buffer. Also, things like skinning for shadow-volume based rendering can be done once and stored for all subsequent passes.
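The "skin once, store, reuse" idea above can be sketched on the CPU like so - all names are illustrative, and real skinning would use matrix palettes rather than a single offset:

```python
def skin_vertex(v, bone_offset):
    # stand-in for real matrix-palette skinning of one vertex
    return v + bone_offset

def skin_pass(vertex_buffer, bone_offset):
    """First pass: skin every vertex and write the results back to a buffer
    (the 'stream output' idea), so later passes can read them directly."""
    return [skin_vertex(v, bone_offset) for v in vertex_buffer]

def shadow_pass(skinned_buffer):
    # a later pass consumes the stored result instead of re-skinning
    return len(skinned_buffer)
```

The saving is that the shadow-volume pass (and any other pass) reads the stored buffer instead of repeating the skinning work per pass.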

I might be a bit off - I sometimes give up reading the PR that Nvidia/ATI spit out; a lot of it is plain rubbish tied up in over-complicated explanations [smile].

Also, as for a "hardware implementation detail" have a look at Beyond3D.com: Differing Philosophies Emerge Between ATI and NVIDIA. It seems that a "Jack Of All Trades, Master Of None" scenario could be emerging if Nvidia is right.

Quote:just a 'shader' in the RenderMan sense? (This won't happen any time soon...)

Agreed. I doubt they do it often, but I read somewhere that it's possible to use a RenderMan shader to read/write data over a standard network connection, as well as to use the random static from an unconnected line-in port on a sound card as the source of random numbers for a procedural texture...

Quote:WGF / DirectX 10 will likely support 'geometry shaders' as they're sometimes called - that's the programmable tessellation hardware you're talking about.

I'd be betting that the better graphics programmers could do some seriously funky graphics if they could get hold of the tessellation/interpolation geometry code [smile]

Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Quote:Original post by jollyjeffers
... blah ...


Jack hit it on the head so I don't believe I need to respond to this one.

Quote:Original post by jollyjeffers
I'd be betting that the better graphics programmers could do some seriously funky graphics if they could get hold of the tessellation/interpolation geometry code.

We have tessellation code now; the issue is that it isn't supported in hardware. With hardware tessellators you'd be able to take a 1,200 polygon character model and render it with a displacement map. The hardware would then tessellate the model and mold it based on the displacement map to create a perfectly smooth mesh. However, since most games aren't geometry limited these days, I don't see consumer level hardware tessellators in the near future. *sigh*
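The tessellate-then-displace process described above can be sketched in 1-D on the CPU - a toy illustration where the "displacement map" is just a list, and the hardware would do this per-patch across the whole mesh:

```python
def tessellate_and_displace(v0, v1, displacement_map):
    """Subdivide the coarse edge v0-v1 into len(displacement_map) vertices,
    then offset each new vertex by the sampled displacement value."""
    n = len(displacement_map)
    verts = []
    for i in range(n):
        t = i / (n - 1)                        # parametric position along the edge
        base = v0 + (v1 - v0) * t              # tessellation: linear interpolation
        verts.append(base + displacement_map[i])  # displacement-map lookup
    return verts
```

The coarse model stays small in memory and over the bus; the fine geometry exists only after tessellation on the chip.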
Quote:Original post by jollyjeffers

From what I've read, its more than just a unified syntax - its about unified data storage and access. DirectX Next Early Preview is an interesting read on this subject.

One example was that you could read/modify/write vertex buffer data from your "generic" shader.. Could be quite powerful way of multi-passing stuff by storing intermediary results back into a texture/vertex buffer. Also, things like skinning for shadow-volume based rendering can be done once and stored for all subsequent passes.

I might be a bit off - I sometimes give up reading the PR that Nvidia/ATI spit out, a lot of it is plain rubbish tied up in over complicated explanations [smile].



You can do much of this today - it's just awkward, since textures can be piped back through via a texture read in vs_3_0. There is much to be gained by stating that all formats must be supported by both vs and ps, and also by facilitating stream output - but neither of these implies a unified shader core. It is unclear whether this is a desirable thing to do or not.

Basically, if you mean by unified that both pixel and vertex shaders will have access to the same resources, then yes, this will happen. That is, textures, vertex buffers, constant buffers, etc. will all be accessible anywhere. But it isn't as huge a leap as it might sound at first, since there is already some sharing of resources between vs_3_0 and ps_3_0.
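The vs_3_0 "pipe results back through a texture" trick mentioned above, sketched on the CPU - the nearest-neighbour 1-D "texture" and all names are illustrative, not any real API:

```python
def sample_texture(texture, u):
    """Nearest-neighbour lookup into a 1-D 'texture' (a plain list here),
    with u in [0, 1]."""
    i = min(int(u * len(texture)), len(texture) - 1)
    return texture[i]

def vertex_shader(position, u, height_texture):
    # read an intermediate result stored by a previous pass, then
    # use it in the vertex stage (e.g. to displace the vertex)
    return position + sample_texture(height_texture, u)
```

This is the awkwardness being described: the data round-trips through a texture and a vertex texture fetch, rather than the vertex stage reading the buffer directly.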

FYI: The Geometry Shader is not a tessellator; using it in such a way would be... abusive.




EvilDecl81

