idea (probably nothing new ;-) )

When I was revising my engine (I've done that a few times now) I thought of a way to make the renderer easier to use, for artists at least. Most people have been implementing some form of the shader/material system from the thread that's pretty famous on this board now, and most have each shader implementation in a DLL (which is what I had). However, I didn't like having to code each new shader (not that I'd done many), so I decided to use a scripting language.

What I'm doing now is to have a single shader class, where each shader is actually an instance of this class (i.e. not derived in a DLL). The class has all the appropriate tools you may need, plus a load-from-file function and a few pointers to different parts of the file (one for each function, like enterShader() etc.). I code the shader itself in Lua, which has access to my renderer for setting up graphics states and so on. When a shader function is called, the class seeks to the part of the file where that function's code is located and executes the script. The file also holds other info to load at startup, covering most of the normal init code and variables.

Now I know that materials are often defined through scripts (like the ATI demos), but I'm not sure if anyone has done exactly this. Either way I haven't fully implemented it yet (I'm stuck on a DLL problem which I wish someone could help with ;-) ). Would this be good, would artists be able to make shaders easily, and do you think it would be too slow? I hope this may inspire people if it is a good idea.
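For what it's worth, here is a minimal C++ sketch of how such a generic shader class might look, assuming the Lua 5.1+ C API. The function names (enterShader() etc.) follow the post; everything else is invented for illustration, and instead of seeking around inside the file it simply loads the whole script once and calls each function as a Lua global:

```cpp
#include <cstdio>
#include <stdexcept>
#include <string>
#include <lua.hpp>

class Shader {
public:
    explicit Shader(const std::string& scriptFile) {
        L = luaL_newstate();
        luaL_openlibs(L);  // standard libs; renderer bindings would be registered here too
        // Running the file executes the init code at the top of the script
        // and defines its functions as globals.
        if (luaL_dofile(L, scriptFile.c_str()) != 0)
            throw std::runtime_error(lua_tostring(L, -1));
    }
    ~Shader() { lua_close(L); }

    // Each "shader function" is just a global function defined by the script.
    void call(const char* fn) {
        lua_getglobal(L, fn);
        if (lua_pcall(L, 0, 0, 0) != 0) {
            std::fprintf(stderr, "shader script error: %s\n", lua_tostring(L, -1));
            lua_pop(L, 1);  // pop the error message
        }
    }
    void enterShader()     { call("enterShader"); }
    void exitShader()      { call("exitShader"); }
    void setShaderParams() { call("setShaderParams"); }

private:
    lua_State* L;
};
```

Every shader in the engine would then be just another instance, e.g. Shader("bumpmap.lua"), with no DLL in sight.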
The Microsoft Effects system that's part of DirectX 9 does pretty much what you describe. I don't think it's actually that useful for artists - not many artists are technical enough to write shaders, and even fewer are technical enough to write shaders that could actually be used in the game (fast enough and with fallbacks for older hardware). This kind of system is great for programmers though - take a look at the EffectEdit tool that comes with the DX SDK, or at RenderMonkey or nVIDIA's FXComposer. The great thing about text-file based scripted shaders is that you can edit them and see the results in real time, without having to go through the edit-build-test cycle you get when writing shaders in C++.
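As a point of comparison, a rough sketch of the D3DX9 Effects path (assuming the DX9 SDK interfaces; the device pointer and "simple.fx" are placeholders):

```cpp
#include <d3dx9.h>

void renderWithEffect(IDirect3DDevice9* device) {
    ID3DXEffect* effect = NULL;
    ID3DXBuffer* errors = NULL;
    D3DXCreateEffectFromFile(device, "simple.fx", NULL, NULL, 0, NULL,
                             &effect, &errors);

    // Pick the first technique that validates on this hardware - this is
    // where the .fx fallback mechanism for older cards kicks in.
    D3DXHANDLE tech = NULL;
    effect->FindNextValidTechnique(NULL, &tech);
    effect->SetTechnique(tech);

    UINT passes = 0;
    effect->Begin(&passes, 0);
    for (UINT i = 0; i < passes; ++i) {
        effect->BeginPass(i);  // applies the pass's states and shaders
        // ... draw calls ...
        effect->EndPass();
    }
    effect->End();
    effect->Release();
}
```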

Some of these tools go further and hook up shader inputs to UI widgets that most artists can adjust, which gives them a great environment for tweaking their art interactively.

Game Programming Blog: www.mattnewport.com/blog

I think what would be useful is a class that can take arbitrary vertex/fragment programs - Cg is a good candidate - that you can use as a test bed for new shaders; changing where the program looks for the shader is simple. However, I'd probably want to hard-code things into their own shader once I've done the initial testing, i.e. minimise state changes to the bare minimum / place state changes into display lists etc.
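Something like this, perhaps - a rough sketch of such a test-bed class using the NVIDIA Cg runtime (file names and entry points are placeholders):

```cpp
#include <Cg/cg.h>
#include <Cg/cgGL.h>

class CgTestBed {
public:
    CgTestBed(const char* vpFile, const char* fpFile) {
        ctx = cgCreateContext();
        // Pick the best profiles the current hardware supports.
        vpProfile = cgGLGetLatestProfile(CG_GL_VERTEX);
        fpProfile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
        // Compile straight from source at run time - swapping in a new
        // shader is just a matter of pointing at a different file.
        vp = cgCreateProgramFromFile(ctx, CG_SOURCE, vpFile,
                                     vpProfile, "main", NULL);
        fp = cgCreateProgramFromFile(ctx, CG_SOURCE, fpFile,
                                     fpProfile, "main", NULL);
        cgGLLoadProgram(vp);
        cgGLLoadProgram(fp);
    }
    void bind() {
        cgGLEnableProfile(vpProfile);
        cgGLEnableProfile(fpProfile);
        cgGLBindProgram(vp);
        cgGLBindProgram(fp);
    }
    void unbind() {
        cgGLDisableProfile(vpProfile);
        cgGLDisableProfile(fpProfile);
    }
private:
    CGcontext ctx;
    CGprofile vpProfile, fpProfile;
    CGprogram vp, fp;
};
```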

It is a good idea for development, but I think that a scripting language for all of the state calls may have too large a hit on performance.

James
Actually, before I left active development, we were doing extensive research on something similar. The idea was to replace the directly linked shaders with meta-scripts. The scripts would not be shaders themselves, but describe them through an abstract language. At startup, those scripts would be executed, generate appropriate shaders on the fly (using the current hardware profile), and link these procedurally created shaders through the usual mechanism to the engine core.

We had some interesting milestones, but the development of the abstract language proved very difficult (as it is supposed to be intuitive, flexible and easy to use). Also, we soon realized that such meta scripts would not only compile down to various shaders, but also require parts of code to be generated for the system CPU. Currently, the dev team is investigating possibilities of on-the-fly compilation of shaders and the respective x86 ASM that goes along with them. Basically, procedurally generated and JIT compiled render pipelines.
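To make the idea concrete, here is a heavily simplified sketch of what "compiling a meta-description down against a hardware profile" could look like. The HardwareProfile struct and the emitted fragments are invented for illustration, not the actual system described above:

```cpp
#include <string>

struct HardwareProfile {
    int  maxTextureUnits;
    bool supportsFragmentPrograms;
};

// Reduce one abstract effect ("per-pixel lighting") to concrete shader
// source for the hardware at hand; the result would then be handed to a
// runtime compiler (Cg/HLSL) at startup.
std::string generatePerPixelLighting(const HardwareProfile& hw) {
    std::string src;
    if (hw.supportsFragmentPrograms) {
        // Full path: emit a fragment-program body.
        src += "float4 main(float3 N : TEXCOORD0 /* ... */) : COLOR {\n";
        src += "  /* dot3 lighting, specular term, ... */\n";
        src += "}\n";
    } else {
        // Fallback: describe a fixed-function multitexture setup instead,
        // possibly split over several passes if the unit count is too low.
        src += "/* combiner setup for " +
               std::to_string(hw.maxTextureUnits) + " texture units */\n";
    }
    return src;
}
```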
To some extent, that kind of system was what I was thinking about when I first posted the material/shader thread and got a reply about using a system similar to DirectX FX files. Of course, I was thinking about having to switch on every single parameter at run time, and I hadn't even thought about vertex/fragment programs (my hardware didn't support them at the time); I quickly dismissed the idea as further discussion in that thread occurred.

It would be interesting to see something like a watered-down version of the RenderMan interface used for the scripts. Obviously we don't yet have the capability on consumer-level hardware to support the full set of shaders that RenderMan uses, but we can definitely simulate some limited subset of the system.

Certainly an idea to think about, Yann, but not something to be attempted by many, I feel! It's striking how small the differences are between CPU architectures, with their associated compilation and optimisation techniques, and those that can be applied to graphics rendering.
I should have expressed my idea better, as I knew that speed/hardware and development time wouldn't allow the sort of system you describe.

Anyway, each shader file has all its parameters at the start, along with the appropriate vertex/fragment programs, either as references to external files or embedded in the shader file itself. So at engine startup the shaders can be linked to effects and tested against the current hardware etc. from the info here.
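A sketch of what that startup pass might look like, again assuming the Lua 5.1+ C API and a hypothetical header layout (a global `programs` table naming the vertex/fragment program files):

```cpp
#include <string>
#include <vector>
#include <lua.hpp>

// Hypothetical header the script might start with:
//   params   = { baseMap = "texture", bumpScale = "float" }
//   programs = { vp = "bump.cgvp", fp = "bump.cgfp" }

// After the script file has been run, collect the program file names so
// the engine can compile and validate them on the current hardware
// before linking the shader to an effect.
std::vector<std::string> readProgramFiles(lua_State* L) {
    std::vector<std::string> files;
    lua_getglobal(L, "programs");       // push the programs table
    if (lua_istable(L, -1)) {
        lua_pushnil(L);                 // first key for lua_next
        while (lua_next(L, -2) != 0) {  // key at -2, value at -1
            if (lua_isstring(L, -1))
                files.push_back(lua_tostring(L, -1));
            lua_pop(L, 1);              // pop value, keep key for next()
        }
    }
    lua_pop(L, 1);                      // pop the table
    return files;
}
```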

What I meant is that the code in the shader's main functions, such as enterShader(), setShaderParams() etc., would live in this file in the form of a script (in my case Lua). I get benefits from this such as being able to change a shader while the engine is running (though this may be problematic for beginners like me ;-) ). The vertex/fragment programs should be able to use Cg, HLSL or asm, but Cg is what I am mainly using just now.

What do you think of this? Do you think it would work well?

Yes, it would work. You could even lose the whole linking-shaders-to-effects process and simply hardcode the calls to the Lua scripts corresponding to startShader() etc... I suppose you could also define some utility functions with standard operations, such as enableVP(const char* name) and the like, to simplify the setup calls and allow you some optimisations such as not setting the same state twice etc.
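A quick sketch of that utility layer: thin wrappers the Lua scripts would call instead of raw API functions, so the engine can silently skip redundant state changes (the names and cache fields are invented for illustration):

```cpp
#include <string>

// Tracks the last state each wrapper set, so scripts can call them freely.
struct StateCache {
    std::string currentVP;   // name of the currently bound vertex program
    bool        blending = false;
};

static StateCache g_cache;

void enableVP(const std::string& name) {
    if (g_cache.currentVP == name)
        return;  // same program requested twice in a row: skip the rebind
    g_cache.currentVP = name;
    // ... look up the compiled program by name and bind it ...
}

void setBlending(bool on) {
    if (g_cache.blending == on)
        return;  // setting the same state twice now costs nothing
    g_cache.blending = on;
    // ... glEnable/glDisable(GL_BLEND) or the D3D equivalent ...
}
```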

I feel the solution must be elegant and future-proof (abstract) enough to easily handle new features like the topology shader and general memory I/O usage from the DX Next specs. It should break free from the current vertexShader->pixelShader pipeline and allow you to describe how to set up the pipeline as e.g. vertexShader->tessellation->vertexShader->pixelShader, or with some parts of it redirected to memory, e.g. topoShader->temp_buffer, temp_buffer->vs->ps. That way, when such functionality arrives, the script will be able to support it with minimal changes to the language. The language itself should be able to support all sorts of "flow" between shaders and memory objects.
The shader system described in the original thread is fully capable of that. You already have the multiple stages to give some order of execution of the shaders; all that will happen is that the output of a shader will include a buffer with the new geometry, which can be picked up by a shader in a subsequent pass for rendering.
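For illustration, the kind of data-driven "flow" description being discussed could be as simple as this sketch (all type and field names invented, following the topoShader->temp_buffer example above):

```cpp
#include <string>
#include <vector>

// Each stage names its input and output buffers, so an arbitrary chain
// of shaders and memory objects is just data the engine walks each frame.
struct PipelineStage {
    std::string shader;   // which program runs at this stage
    std::string input;    // buffer it reads  ("" = scene geometry)
    std::string output;   // buffer it writes ("" = framebuffer)
};

// The tessellation example from the post, expressed as a stage list:
std::vector<PipelineStage> pipeline = {
    { "topoShader", "",            "temp_buffer" },  // emit new geometry
    { "vs",         "temp_buffer", ""            },  // transform it
    { "ps",         "",            ""            },  // shade as usual
};
```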

