Material/Shader implementation


110 replies to this topic

#1 jamessharpe   Members   -  Reputation: 497


Posted 20 July 2003 - 12:47 AM

I'm thinking of using a pluggable DLL system for my shaders/materials and just wanted to run my design past you guys. The shader DLL exports a function which sets up all the applicable states required to obtain the effect. The only problem with this is that the DLL doesn't know what the current set of states is. I am therefore thinking of passing a structure containing the current states of the renderer. Although my renderer is designed to be pluggable for different APIs, I feel that the shaders need to have a different function for each API, because of the possible different states in the renderer. My basic rendering loop then becomes:

Call shader function (from DLL, although some simple shaders could be in the renderer code itself; just use function pointers)
for (each object that uses shader)
    Set up vertex arrays
    Bind textures
    Render geometry using index array
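A minimal sketch of what that DLL contract could look like in C++ (all names here - RenderStates, SetupShaderFn, setup_flat_shader - are hypothetical, not from the original design):

```cpp
#include <cassert>

// Hypothetical snapshot of renderer state, passed into each shader so
// the DLL knows the current set of states (as proposed above).
struct RenderStates {
    bool depth_test;
    bool alpha_blend;
    int  bound_texture;   // 0 = no texture bound
};

// The signature a shader DLL would export (resolved via
// GetProcAddress/dlsym at load time).
typedef void (*SetupShaderFn)(RenderStates& states);

// A simple shader compiled into the renderer itself; it is called
// through the same function-pointer slot as DLL-loaded shaders.
inline void setup_flat_shader(RenderStates& states) {
    states.depth_test    = true;
    states.alpha_blend   = false;
    states.bound_texture = 0;
}
```

The renderer only sees `SetupShaderFn` pointers, so built-in and DLL-loaded shaders are interchangeable.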


#2 S1CA   Members   -  Reputation: 1400


Posted 20 July 2003 - 05:57 AM

Yep, passing a structure containing states is a good idea.

Even if you''re not using D3D as one of your APIs, I''d still recommend taking a look at D3DX Effects & Techniques and their files (.FX) for ideas - the concept is easily portable to OpenGL and even console APIs.

With .FX files you take the concept one step further: instead of going to the trouble of compiling each shader into a plugin DLL, the file is just a text file that's essentially a cross between a glorified .ini file and a script.

The ability to change the operation of a shader in Notepad while the app is running is really nice. If you add a bit of code to notify you when a shader file changes, the engine can do an automatic refresh, so the result of changes can be seen as you make them.
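A sketch of that change-notification idea, reduced to polling a file's modification time (ShaderFile and poll_shader are illustrative names; a real engine would hook this to the filesystem and re-parse the .FX text on reload):

```cpp
#include <cassert>
#include <cstdint>

// Remembers the last seen modification time of one shader file.
struct ShaderFile {
    int64_t last_seen_mtime = -1;
    int     reload_count    = 0;
};

// Returns true (and "reloads") only when the observed mtime changed
// since the last poll; unchanged files are skipped.
inline bool poll_shader(ShaderFile& f, int64_t current_mtime) {
    if (current_mtime == f.last_seen_mtime)
        return false;
    f.last_seen_mtime = current_mtime;
    ++f.reload_count;   // real code would re-parse the shader text here
    return true;
}
```

Polling once per frame (or on a timer) is enough to make edits in Notepad show up while the app runs.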


A look at other systems like ATI's RenderMonkey might be worthwhile too (i.e. let someone else write the editor tools for you...)

--
Simon O'Connor
ex-Creative Asylum
Programmer &
Microsoft MVP


#3 mohamed adel   Members   -  Reputation: 174


Posted 20 July 2003 - 07:56 AM

If you are using D3D, you can use state blocks to record the current renderer state.

#4 jamessharpe   Members   -  Reputation: 497


Posted 20 July 2003 - 09:03 AM

I haven't looked at .FX files yet; I'll have to have a look into them - do you need vertex or pixel shaders to use them?

I think that I will use a hybrid system - allow DLLs and text-file-type shaders. I think that the DLL shader could be quite powerful if you want to do more than just change states, e.g. implementing stuff like Perlin noise in hardware where you need the CPU to calculate some information. I think that describing these in a text file would be rather complex.

#5 Yann L   Moderators   -  Reputation: 1798


Posted 20 July 2003 - 09:42 AM

I have considered the shader script approach in our engine, but I finally went with DLL shaders for the reasons jamessharpe mentioned. Here's a short overview; perhaps it can give someone a couple of ideas:

All user defined shaders are derived from an exported base class. The base class provides basic functionality, helper functions (vertex and pixel shader management, Cg connector, etc), gives access to general engine states, and auto-registers the new derived shaders at a central shader registry.
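One possible shape for such an auto-registering base class (ShaderBase, ShaderRegistry and the example shader are illustrative names, not the actual engine API): a static instance in each DLL registers itself with a central registry from its constructor, so merely loading the DLL makes the shader known to the engine.

```cpp
#include <cassert>
#include <map>
#include <string>

class ShaderBase;

// Central shader registry, queried later by the dependency resolver.
class ShaderRegistry {
public:
    static ShaderRegistry& instance() {
        static ShaderRegistry r;   // created on first use
        return r;
    }
    void add(const std::string& name, ShaderBase* s) { shaders_[name] = s; }
    ShaderBase* find(const std::string& name) const {
        auto it = shaders_.find(name);
        return it == shaders_.end() ? nullptr : it->second;
    }
private:
    std::map<std::string, ShaderBase*> shaders_;
};

// Exported base class: registering in the constructor means one static
// instance per derived shader is all a DLL needs to provide.
class ShaderBase {
public:
    explicit ShaderBase(const std::string& name) {
        ShaderRegistry::instance().add(name, this);
    }
    virtual ~ShaderBase() = default;
};

class Dot3BumpShader : public ShaderBase {
public:
    Dot3BumpShader() : ShaderBase("dot3_bump") {}
};

static Dot3BumpShader g_dot3_bump;  // auto-registers at load time
```

Capability queries, pass counts and the Cg connector would hang off ShaderBase as virtual methods in the same spirit.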

When a render display is opened, and the system has parsed the hardware caps of the 3D card, the registered shader classes are queried one by one. If the hardware doesn't offer the features required by a class, it is removed from the list. If it does, then the shader is queried for the number and type of passes it requires to create the effect on the current hardware. The info is stored. A shader class can dispatch or forward parts of the work to a different shader class, which allows for multipass setups. Those dependencies are resolved in the next step. This bouncing and forwarding of shader passes opens the possibility to express effects such as shadowmapping (render from light, store in render texture, render geometry with depth texture) or multipass reflection/refraction through this simple interface.

Those pretty complex steps are only done once on setup. When a scene is loaded, each chunk of geometry has an effect signature that gives the system an idea of how it should look. E.g.: "render this geometry chunk with bumpmapping, local shadows and EMBM reflections. I don't care about the details, just do it as well as you can on the current hardware". Those effect classes are connected to the shader class dependency chain outlined above. The system chooses the shader class that is best suited to render the required effect, and sets a table of function pointers to the appropriate shader (this could also work through virtual functions, but I thought function pointers might be a bit more efficient in this case).

So basically, while rendering, it all boils down to a few function pointer calls: setup_shader, enter_shader, shader_params, fill_shader_cache, exit_shader.

Edit: I'd like to point out a very important feature of that system (or similar systems): suppose your game shipped, and two months later a top-notch 3D card with superb features hits the market. All you have to do is write a couple of new shader classes for this hardware, and give them the appropriate effect signature, but with a higher priority than the ones you previously used in the game. Compile as a DLL, put it onto your website. A 20k download for the user; he puts it into his game directory, restarts the game, and voilà: brand new top-of-the-line effects on the new 3D card, without changing anything in the game (as the shader dependency resolver recognizes the effect signature, and overrides the old shader classes with lower priority).


[edited by - Yann L on July 20, 2003 4:51:48 PM]

#6 Ingenu   Members   -  Reputation: 932


Posted 20 July 2003 - 10:19 PM

I have both systems: a 'script' approach with D3D/OpenGL compatibility, and a plugin/DLL approach too.
Although I've not toyed with the plugins yet, and the script version is VERY fast.

I use a State approach (who doesn't?): TMUStates, VPStates, FPStates and GeneralStates (the latter needs a better name, BTW).
It works well.

Also, those State objects are built by my renderer, which can be D3D or OpenGL. The only 'drawback' atm is that you cannot change the renderer during the game. You need to exit and relaunch it with the new API/renderer...

Note however that it's not a design problem, much more a lack of will to code it ^^ (it can be done, but I don't see the point)

-* So many things to do, so little time to spend. *-


#7 jamessharpe   Members   -  Reputation: 497


Posted 21 July 2003 - 05:41 AM

quote:
Original post by Yann L
A shader class can dispatch or forward parts of the work to a different shader class, that allows for multipass setups. Those dependencies are resolved in the next step. This bouncing and forwarding of shader passes opens the possibility to express effects such as eg. shadowmapping (render from light, store in render texture, render geometry with depth texture) or multipass reflect/refractions through this simple interface.



Is this a pre-process step of the render-queue? I can see that if multipass is required then there may be shader state switches that could be made redundant by sorting.

quote:

So basically, while rendering, it all boils down to a few function pointer calls: setup_shader, enter_shader, shader_params, fill_shader_cache, exit_shader.



Let me check I have this right:

setup_shader - sets any states that are required to render the effect

enter_shader - called each time a primitive is drawn ( like glBegin)

shader_params - bind textures required and shader specific parameters

fill_shader_cache - pass the shader the index and vertex buffers

exit_shader - tell the engine we've finished the primitive (draw buffer)

This means moving calls to stuff like glDrawElements into the shaders, correct? I suppose the engine provides some basic shaders, e.g. gouraud shading, single texture, two textures, etc.

I suppose that in the end all shaders (ignoring pixel and vertex shaders) could be implemented as multipasses of these 'basic' shaders.

The advantage I see of this system over something like Quake 3's renderer codepaths is that it is more dynamic, i.e. if a feature is supported it is used, and a fallback system can be made using a priority system - which would also give us an easy way to adjust the render detail level.

I'd totally missed the possibility of using an abstract base class for the shader interface; I was going to use just function pointers, but I prefer the ABC method.

#8 Yann L   Moderators   -  Reputation: 1798


Posted 21 July 2003 - 06:53 AM

quote:
Original post by jamessharpe
Is this a pre-process step of the render-queue? I can see that if multipass is required then there may be shader state switches that could be made redundant by sorting.


Actually, it is not done at render time, but as a one-time preprocess after a new scene has been loaded. It can take a couple of seconds, depending on the size of the scene, and the complexity of the shader dependencies. Essentially, for each geometry chunk in the scene, a best fit shader combination (with as few passes as possible on the current HW, and best possible quality) is evaluated, and stored with the mesh chunk.

Later on, in the actual render loop, this stored information is then simply used to call the appropriate shaders at the right time. An additional state-change optimizing sorting pass is applied each frame, after all visible chunks have been determined, and before the lists are dispatched to the shaders. It's a simple 48bit radix sort, sorting on shader ID and shader param pointer (so that shaders with the same parameter set are grouped).
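The 48-bit sort key could be packed along these lines (the field widths and the pointer hash are assumptions for illustration, not the engine's actual layout): shader ID in the high bits, a hash of the parameter pointer in the low bits, so chunks sharing a shader and a parameter set sort adjacently.

```cpp
#include <cassert>
#include <cstdint>

// Pack a sort key: 16-bit shader ID above a 32-bit hash of the shader
// parameter pointer (48 significant bits in total). Sorting on this key
// groups chunks first by shader, then by parameter set, which is what
// enables the lazy evaluation in shader_params.
inline uint64_t make_sort_key(uint16_t shader_id, const void* params) {
    uint32_t phash = static_cast<uint32_t>(
        reinterpret_cast<uintptr_t>(params) & 0xFFFFFFFFu);
    return (static_cast<uint64_t>(shader_id) << 32) | phash;
}
```

A radix sort over these keys then runs in three 16-bit passes per frame.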

quote:

Let me check I have this right:
[...]


Almost:

setup_shader - called once after the scene was loaded. Gives the selected shaders the opportunity to create some internal data, e.g. normalization cubemaps, register vertex and pixel shaders with the engine, etc. It's pretty much the constructor of the shader. You can't put it into the real class ctor, since that one will be called at a time when the render subsystem is not yet initialized. The destruction of those shader states is done through garbage collection, when the scene is closed.

enter_shader - sets any states that are required to render the effect

shader_params - called each time a primitive (geometry chunk) is drawn (like glBegin); binds the required textures and shader-specific parameters. Called using lazy evaluation, i.e. only if the states changed from the last geometry chunk (supported by the radix sorting prior to mesh dispatching).

fill_shader_cache - only called if the geometry chunk is not yet cached in VRAM. Used to fill the vertices into VRAM, with the components required by the shader. If this data is still in the cache from the last frame, this function is not called.

exit_shader - called as soon as we are done with the shader and a different one is selected. Typically there is not very much in this function, but it can e.g. pop back matrices that were pushed during enter_shader.
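Boiled down, the per-shader function-pointer table and the render loop around it might look like this (Chunk, ShaderFuncs and the dummy shader functions are illustrative; setup_shader and fill_shader_cache, plus the lazy evaluation of shader_params, are omitted for brevity):

```cpp
#include <cassert>

struct Chunk { int id; };

// Counters let us observe the call pattern of the lifecycle.
static int g_entered = 0, g_params_set = 0, g_exited = 0;

// The function-pointer table the dependency resolver fills in per shader.
struct ShaderFuncs {
    void (*enter_shader)();
    void (*shader_params)(const Chunk&);
    void (*exit_shader)();
};

static void enter_dummy()              { ++g_entered; }
static void params_dummy(const Chunk&) { ++g_params_set; }
static void exit_dummy()               { ++g_exited; }

// Render loop: one enter/exit pair per shader, shader_params per chunk
// (the real engine would skip shader_params when states are unchanged).
inline void render(const ShaderFuncs& s, const Chunk* chunks, int n) {
    s.enter_shader();
    for (int i = 0; i < n; ++i)
        s.shader_params(chunks[i]);
    s.exit_shader();
}
```

Because sorted chunks arrive grouped by shader, enter/exit run once per group rather than once per chunk.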

quote:

This means moving calls to stuff like glDrawElements into the shaders correct?


No, only the cache filling is done by the shader. Only the shader knows exactly what components it needs in a vertex array. glDrawElements is called from the main render loop on the cache entry, outside of the shader.

quote:

I suppose the engine provides some basic shaders e.g. gouraud shading, single texture, two textures etc..


Yes, it provides basic functionality, but on a more abstract level (i.e. not depending on the number of textures). For example: simple Gouraud, diffuse texture, diffuse and bump, diffuse + bump + specularity, etc. All in all, the engine provides approx. 50 basic shader types, but most of them reflection/refraction oriented. You can get basic geometry with good quality (i.e. incl. specular maps + bump maps) by using, I'd say, 5 or 6 shaders.

quote:

I suppose that in the end all shaders(ignoring pixel and vertex shaders) could be implemented as multipasses of these 'basic' shaders.


Yep, that's the idea. And that includes pixel and vertex shaders, since the abstract base class provides vertex and fragment shader management. An external shader can register its VP/FP (either as ASM or as Cg) with the system, and those will be activated as needed.

quote:

The advantage I see of this system over that of something like quake3's renderer codepaths is that it is more dynamic i.e. if a feature is supported it is used and also a fallback system can be made using a priority system - which would also enable us to have an easy way to adjust the render detail level.


Right. You don't really have to care about a fallback system, as it is implicitly given by selecting appropriate shader priorities.


[edited by - Yann L on July 21, 2003 1:56:57 PM]

#9 jamessharpe   Members   -  Reputation: 497


Posted 21 July 2003 - 08:00 AM

quote:
Original post by Yann L

Actually, it is not done at render time, but as a one-time preprocess after a new scene has been loaded. It can take a couple of seconds, depending on the size of the scene, and the complexity of the shader dependencies. Essentially, for each geometry chunk in the scene, a best fit shader combination (with as few passes as possible on the current HW, and best possible quality) is evaluated, and stored with the mesh chunk.




So the shader dependency chain for the effect is looked up and the geometry is duplicated in the render queue with each applicable shader, and this is actually contained within the mesh representation in the engine, correct? So every geometry chunk passed to the renderer requires no more than one pass?

quote:

Yes, it provides basic functionality, but on a more abstract level (i.e. not depending on the number of textures). For example: simple Gouraud, diffuse texture, diffuse and bump, diffuse + bump + specularity, etc. All in all, the engine provides approx. 50 basic shader types, but most of them reflection/refraction oriented. You can get basic geometry with good quality (i.e. incl. specular maps + bump maps) by using, I'd say, 5 or 6 shaders.



How are textures managed then? Are they specified in the shader params, e.g. number of texture layers, handles to the textures to be used, or is some other system used?

quote:

Right. You don't really have to care about a fallback system, as it is implicitly given by selecting appropriate shader priorities.



I realise that, but it is a useful tool for debugging and sometimes a player may want to switch off a particular feature in the game in order to achieve a higher framerate for a more responsive game.

quote:

And that includes pixel and vertex shaders, since the abstract base class provides vertex and fragment shader management. An external shader can register its VP/FP (either as ASM or as Cg) with the system, and those will be activated as needed.



My hardware doesn't support pixel or vertex shaders, so I won't be implementing them for a while (perhaps in DirectX with the software drivers (slow!)), but I suppose I ought to look into how they are coded in order to make sure I don't stop them being easily integrated.



#10 Yann L   Moderators   -  Reputation: 1798


Posted 21 July 2003 - 12:14 PM

quote:
Original post by jamessharpe
So the shader dependency chain for the effect is looked up and the geometry is duplicated in the render queue with each applicable shader, and this is actually contained within the mesh representation in the engine, correct?


The geometry itself is not duplicated; that would cost too much memory. It is simply instantiated for each shader pass, with a pointer to the actual mesh data. So, for example, if geometry chunk A has bounced through 3 shaders (using 3 passes), the dependency manager will create 3 instances of that geometry, each containing a pointer to the original mesh chunk data.
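So a per-pass instance is little more than a shader ID plus a pointer to the shared mesh data - a sketch (MeshData and ChunkInstance are illustrative names):

```cpp
#include <cassert>

// The heavyweight mesh data, stored exactly once.
struct MeshData { int vertex_count; };

// One lightweight instance per shader pass: three passes cost three
// small structs, never three copies of the geometry.
struct ChunkInstance {
    int             shader_id;
    const MeshData* mesh;      // shared, never duplicated
};
```

The render queue then holds ChunkInstance values, and the radix sort reorders these cheap structs rather than the geometry itself.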

quote:

So every geometry chunk passed to the renderer requires no more than one pass?


Each geometry chunk instance passed to the renderer requires exactly one pass by one specific shader, yes.

quote:

How are textures managed then? Are they specified in the shader params, e.g. number of texture layers, handles to the textures to be used, or is some other system used?


As you said, their handles (32bit GUIDs) are specified in the shader params. One for each texture type the shader might need: diffuse1 to 8, bump1 and 2 (for detail bumpmaps), specularity, opacity, reflectivity, refractivity, etc, depending on the effect type. Each selected shader can then extract and use the texture handles it needs.

quote:

quote:

Right. You don't really have to care about a fallback system, as it is implicitly given by selecting appropriate shader priorities.



I realise that, but it is a useful tool for debugging and sometimes a player may want to switch off a particular feature in the game in order to achieve a higher framerate for a more responsive game.


Yes, of course. I didn't mean that you wouldn't be able to control it manually (you can), but that you don't have to build in separate code paths. User-defined quality params can simply be included as additional constraints in the dependency chain. Every time the user has changed the visual parameters, simply resolve the dependencies, and continue rendering. Everything will automatically adjust to the new situation. That system could theoretically even handle hot-plugging of the 3D card.

quote:

My hardware doesn't support pixel or vertex shaders, so I won't be implementing them for a while (perhaps in DirectX with the software drivers (slow!)), but I suppose I ought to look into how they are coded in order to make sure I don't stop them being easily integrated.


That's not so hard. You can simply treat VP/FPs as two additional shader states. In shader_setup, you register your VPs and FPs, and get back a handle. When using a shader, you activate your VP/FP using the handle you got earlier. Just a simple additional state.
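A sketch of that handle-based state idea (ProgramStore is an illustrative name, not the engine's API): register the program source once, get back a small integer handle, and activate by handle from then on.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Owns all registered vertex/fragment program sources; shaders only
// ever see integer handles, so activating a program is just another
// cheap state change.
class ProgramStore {
public:
    int register_program(const std::string& source) {
        sources_.push_back(source);          // real code would compile here
        return static_cast<int>(sources_.size()) - 1;  // the handle
    }
    void activate(int handle) { active_ = handle; }
    int  active() const       { return active_; }
private:
    std::vector<std::string> sources_;
    int active_ = -1;                        // -1 = fixed function
};
```

Hardware without VP/FP support simply never gets shaders that register programs, thanks to the caps query at startup.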


#11 jamessharpe   Members   -  Reputation: 497


Posted 22 July 2003 - 02:33 AM

Thanks, I think I've got the idea sorted now - all I have to do now is implement it!

James

#12 EvilDecl81   Members   -  Reputation: 360


Posted 22 July 2003 - 11:54 AM

quote:
Original post by jamessharpe
I haven't looked at .FX files yet; I'll have to have a look into them - do you need vertex or pixel shaders to use them?

I think that I will use a hybrid system - allow DLLs and text-file-type shaders. I think that the DLL shader could be quite powerful if you want to do more than just change states, e.g. implementing stuff like Perlin noise in hardware where you need the CPU to calculate some information. I think that describing these in a text file would be rather complex.



You do not need to use pixel or vertex shaders for effect files (they can represent fixed function modes), nor do you have to use HLSL (assembly shaders will work as well).

Effect files are actually capable of performing arbitrary math setup and can generate procedural textures (you need a couple of lines of app code to do this), including Perlin noise functions if so desired. Basically, you can write HLSL expressions anywhere you want. With DX9 semantics, vertex data binding is somewhat built-in, making it fairly standard and queryable.

Since Effect Files have an annotation system, you can add whatever extensions you want as long as you write app code to parse them.

#13 jamessharpe   Members   -  Reputation: 497


Posted 23 July 2003 - 05:29 AM

quote:
Original post by EvilDecl81
quote:
Original post by jamessharpe
I haven't looked at .FX files yet; I'll have to have a look into them - do you need vertex or pixel shaders to use them?

I think that I will use a hybrid system - allow DLLs and text-file-type shaders. I think that the DLL shader could be quite powerful if you want to do more than just change states, e.g. implementing stuff like Perlin noise in hardware where you need the CPU to calculate some information. I think that describing these in a text file would be rather complex.



You do not need to use pixel or vertex shaders for effect files (they can represent fixed function modes), nor do you have to use HLSL (assembly shaders will work as well).

Effect files are actually capable of performing arbitrary math setup and can generate procedural textures (you need a couple of lines of app code to do this), including Perlin noise functions if so desired. Basically, you can write HLSL expressions anywhere you want. With DX9 semantics, vertex data binding is somewhat built-in, making it fairly standard and queryable.

Since Effect Files have an annotation system, you can add whatever extensions you want as long as you write app code to parse them.


But I want an API-independent solution. Granted, I will probably have to rewrite the DLL code for each renderer that I want to use, but this is of little effort really.

Otherwise I will have to implement the DX9 effect file system in OpenGL, which seems to me to be a little pointless when a higher level of abstraction is feasible.

James


#14 mattnewport   GDNet+   -  Reputation: 1029


Posted 24 July 2003 - 12:54 AM

I believe that nVidia's CgFX is compatible with DX9 effect files, and it is available for OpenGL and D3D. If you're looking for an API-independent solution, it's worth investigating. Your DLL solution is API-independent but it's not platform-independent, and I see little point in supporting both OpenGL and D3D unless you're planning on a Linux release.

I really think you should look at D3D effect files and CgFX before embarking on your pluggable DLL scheme. Even if you decide not to go the text-file shader route, you'll probably find you get some useful ideas from the way they handle things. The D3D effect file system is really quite powerful and flexible, and once you've tried playing with effects in EffectEdit - where you can change your shaders and see the changes in real time without any need to recompile or even restart your application - you'll start to appreciate the benefits of this system.

#15 jamessharpe   Members   -  Reputation: 497


Posted 24 July 2003 - 06:48 AM

I think that I am going for the DLL method, but this gives me the flexibility. I can easily write a system such that if it finds that a DLL for an effect does not exist, it can try to load an FX file instead, since the shader class can act as a wrapper for the Cg code.

As to supporting OpenGL and D3D: it's not really that I want to support both completely. I just feel that if I can abstract the render operations to a level where both CAN be supported, then this will give a flexible system for the future. Say next year a card came out with its own API (like happened with 3dfx cards), or a totally new API became available, or even if you want to write your own software renderer - being tied to an API-specific feature makes it all that bit harder to port. If I come up with a neat way of implementing this system with support for stuff like Cg and HLSL and FX files, I may consider creating an article based on this idea - or even a couple of articles looking at creating a pluggable abstract render framework.

[edited by - jamessharpe on July 24, 2003 1:57:11 PM]

#16 _DarkWIng_   Members   -  Reputation: 602


Posted 24 July 2003 - 07:54 AM

I just have a few questions. How do you pass parameters to a shader? E.g. a bumpmapping shader needs lights (their number, position, type...). It should be through Yann's "shader_params" function, but what I don't understand is who sets them. The shader itself, or some other part of the system? And where is this info (local lights, ...) stored? Together with the geometry chunk?

In my old engine I used a shader_update function that called other systems to get the required info, but that was kind of messy, so I would like to know how others do it.

You should never let your fears become the boundaries of your dreams.

#17 Yann L   Moderators   -  Reputation: 1798


Posted 24 July 2003 - 04:58 PM

quote:
Original post by _DarkWIng_
I just have a few questions. How do you pass parameters to a shader? E.g. a bumpmapping shader needs lights (their number, position, type...). It should be through Yann's "shader_params" function, but what I don't understand is who sets them. The shader itself, or some other part of the system? And where is this info (local lights, ...) stored? Together with the geometry chunk?


Depends on the type of shader data. I separated the data into two distinct types: global scene shader data, and local geometry chunk data. You can subdivide it into more types, if you want.

Local geometry chunk shader data represents the shader states that are tightly associated with an individual mesh chunk, but have no real connection to the scene itself. For example texture IDs (each chunk is associated with one or more different texture layers, and each has an ID), or simply the chunk colour, opacity, reflectivity, IOR, etc. This data is created by the precalc system (scene compiler), and stored with the 3D file.

Global scene shader data represents the shader states that aren't associated with any particular mesh chunk, but rather with the entire scene. Examples include light sources, sunlight direction, skylight distribution, atmospheric parameters, volume fog areas, etc. Those parameters are accessible to any shader through a standardized shared interface. Each shader can make use of them, but it's optional.

For example, consider a per-pixel DOT3 shader, using a diffuse texture, a bumpmap, a normalization cubemap, and the four nearest light sources to get the lighting done:

Local geometry chunk shader data:
* Diffuse texture GUID: 00000001
* Bump texture GUID: 00000002
* Opacity: 100%
...etc...

Global scene shader data:
* All lightsources in the scene
* Current viewpoint position
* Skylight
...etc...

The shader gets the local mesh chunk data for the geometry chunk it currently processes through a parameter to shader_params(). It can access the global scene data through a standard interface included in the shader baseclass.

First, it would bind the textures specified in the local data, set the opacity, etc:

void MyCoolShader::shader_params(const Chunk &C)
{
    BindTexture(UNIT0, C.LocalData->DiffuseTex);
    BindTexture(UNIT1, C.LocalData->BumpTex);
    BindTexture(UNIT2, Tools->SharedNormalizationCubeMap());
    ...etc...


Then, it would use a utility function also provided by the baseclass, in order to retrieve the four nearest lightsources, and copy them into vertex program registers:


    CLightPool *LightPool = Tools->GetNearestLights(C, GlobalData->Lights, 4);

    VPControl->LoadRegister(0, LightPool[0]);
    VPControl->LoadRegister(1, LightPool[1]);
    VPControl->LoadRegister(2, LightPool[2]);
    VPControl->LoadRegister(3, LightPool[3]);

    ... etc, enable other states, enable VP ...
}


That's it; the geometry can now be rendered.


#18 Ingenu   Members   -  Reputation: 932


Posted 24 July 2003 - 09:51 PM

For information, see:
http://mirror.ati.com/developer/gdc/AtiGDC02Vlachos.PDF

It's all about implementing a cross-platform shader lib for a game engine.

Additionally, you can check other papers here:
http://mirror.ati.com/developer/techpapers.html
(I believe there's another one speaking about HLSL in ShaderLib)


-* So many things to do, so little time to spend. *-


#19 davidino79   Members   -  Reputation: 156


Posted 24 July 2003 - 11:32 PM

I haven't completely understood how you resolve the dependencies between the shaders. How do you connect one shader with another?

Davide

#20 jamessharpe   Members   -  Reputation: 497


Posted 25 July 2003 - 08:36 AM

A couple more questions:

1. If we are applying a sort by shader, how do we deal with transparent objects, which require a back-to-front rendering order? Do we need to query the shader as to whether it implements transparency and draw this shader last?

2. I can't think of a neat way of encoding the desired look of the rendered object, i.e. stating that you want it bump-mapped, environment-mapped, etc. I could use a bitfield, with each bit representing a particular render effect.
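The bitfield idea from question 2 could be sketched like this (flag names are made up for illustration): each bit is one requested effect, and a shader matches when its capability mask covers every requested bit.

```cpp
#include <cassert>
#include <cstdint>

// One bit per render effect in the chunk's effect signature.
enum EffectFlags : uint32_t {
    FX_BUMPMAP  = 1u << 0,
    FX_ENVMAP   = 1u << 1,
    FX_SPECULAR = 1u << 2,
};

// A shader is a candidate when its capability mask covers everything
// the chunk asks for; the priority system would pick among candidates.
inline bool shader_covers(uint32_t shader_mask, uint32_t wanted) {
    return (shader_mask & wanted) == wanted;
}
```

A fixed bitfield is simple but caps the number of effects at the word size; Yann's named effect signatures avoid that limit.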





