mancubit

Member Since 08 Jul 2008
Offline Last Active Yesterday, 07:15 AM
-----

Topics I've Started

Shader effect system design

12 July 2012 - 02:22 PM

I am currently working on the shader effect system for my rendering engine, and I am not quite sure how to design it properly. A shader effect in my engine describes the shader pipeline configuration for a single rendering effect (basically, setting the right shaders for each stage and setting the correct shader parameters). What I want to achieve is a system with high flexibility without making it overly complex and unmaintainable.

That said, I can't figure out a good way to handle shader parameters that exist in more than one shader but differ in type. For instance, let's say I have a single shader effect which combines a vertex shader that has a parameter "Color" as a vector3 and a pixel shader that also has a parameter "Color", but as a vector4.

The problem is, it would be nice to set parameters by name, but what should I do in a case like the example above?

The solutions that came into my mind are the following:
  • Do not allow combinations of shaders whose parameters conflict (kind of inflexible)
  • Ignore shader parameter conflicts and just set the bytes that fit into the parameter / are provided by the application (may result in hard-to-find errors)
  • Set the parameters per shader rather than for the whole shader effect (uncomfortable)

Personally I am not a big fan of any of these solutions, but I think (1) and (2) could be okay. Setting the parameters for every single shader (3) feels a little too tedious and uncomfortable.
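To make the trade-off a bit more concrete, here is a minimal sketch of what option (3) could look like. None of this is from my actual code; the class and method names (ShaderEffect, SetParameter, etc.) are just placeholders for illustration.

[source lang="cpp"]#include <cstring>
#include <map>
#include <string>
#include <vector>

class Shader
{
public:
    // Copies raw bytes into this shader's own parameter storage,
    // so two stages can each have their own "Color" of different size.
    void SetParameter(const std::string& name, const void* data, size_t size)
    {
        std::vector<unsigned char>& storage = m_parameters[name];
        storage.resize(size);
        std::memcpy(storage.data(), data, size);
    }

private:
    std::map<std::string, std::vector<unsigned char>> m_parameters;
};

class ShaderEffect
{
public:
    Shader* VertexShader() { return &m_vertexShader; }
    Shader* PixelShader()  { return &m_pixelShader; }

private:
    Shader m_vertexShader;
    Shader m_pixelShader;
};

// Usage: the caller has to know which stage it is talking to,
// which is exactly the "uncomfortable" part of option (3).
void Example(ShaderEffect& effect)
{
    const float color3[3] = { 1.0f, 0.0f, 0.0f };        // vector3 "Color"
    const float color4[4] = { 1.0f, 0.0f, 0.0f, 1.0f };  // vector4 "Color"

    effect.VertexShader()->SetParameter("Color", color3, sizeof(color3));
    effect.PixelShader()->SetParameter("Color", color4, sizeof(color4));
}[/source]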

I am interested in how you have handled this problem in your code. Are there any good solutions I have not thought of?

Thanks for your help!

API-agnostic systems without massive dynamic casts - possible?

04 July 2012 - 03:05 PM

I am currently trying to build an API-agnostic rendering engine. I am doing this simply for fun and I hope to learn a lot from it, so it's nothing really professional, but it should serve as a basis for rendering experiments or maybe a game someday.

The thing I have problems with is how to handle the boundary between cross-platform and API-specific code. I can't really find a way to avoid massive dynamic casts here. I know this may sound like premature optimization (which, to a certain extent, it probably is), but as I said I want to gain experience and I don't think I have found the best possible solution yet, so I decided to ask the community.


So let's take the shader system as an example:

I have an abstract base class called "Shader" which represents a single shader (vertex shader, pixel shader, etc.), and I have an abstract "Renderer" class which can set a specific shader when passed an object of the base class "Shader", like this:
[source lang="cpp"]// in class Renderer:
virtual void SetVertexShader(Shader* shader) = 0;[/source]

So let's imagine I have an API-specific shader (derived from Shader) called "ShaderDX11" and a corresponding renderer (derived from "Renderer") called "RendererDX11". RendererDX11 now implements the SetVertexShader method and performs the API-specific work to activate the shader.

Now I can't figure out how to avoid a dynamic cast here to access the object as a "ShaderDX11", because I only have a pointer to a "Shader" object. I know that this can only be an object of type "ShaderDX11", yet I don't see how to avoid a dynamic cast every time I set a single shader.

The thing that bothers me is that I have to perform a dynamic cast for every single resource that interacts with API-specific code (buffers, textures, shaders, render states, etc.). Is it common practice to make massive use of dynamic casts here? Or am I just missing something?
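To illustrate what I mean, here is a rough sketch of the static_cast variant that came to my mind: since a RendererDX11 should only ever be handed shaders created for D3D11, the downcast could be a static_cast, with the dynamic_cast kept only as a debug-time sanity check. This is just a sketch; everything beyond the names Shader, Renderer, ShaderDX11, and RendererDX11 is made up for illustration.

[source lang="cpp"]#include <cassert>

class Shader
{
public:
    virtual ~Shader() {}
};

class Renderer
{
public:
    virtual ~Renderer() {}
    virtual void SetVertexShader(Shader* shader) = 0;
};

// Hypothetical D3D11 side, just enough to show the cast.
class ShaderDX11 : public Shader
{
public:
    void* NativeHandle() const { return 0; } // stand-in for ID3D11VertexShader*
};

class RendererDX11 : public Renderer
{
public:
    virtual void SetVertexShader(Shader* shader)
    {
        // The renderer only ever receives shaders it created itself, so the
        // concrete type is known: a static_cast is enough. The dynamic_cast
        // stays as an assert to catch mistakes in debug builds.
        assert(dynamic_cast<ShaderDX11*>(shader) != 0);
        ShaderDX11* dxShader = static_cast<ShaderDX11*>(shader);

        // ... call the API with dxShader->NativeHandle() ...
        (void)dxShader;
    }
};[/source]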

Thanks for your help!

Bruneton's atmospheric scattering demystified

04 February 2012 - 09:11 AM

I decided to take a closer look at atmospheric scattering in my master thesis. I guess everyone who is interested in this kind of topic has stumbled over Bruneton's precomputed atmospheric scattering model (paper found here), as it is considered the most accurate and realistic scattering model to date. While the paper itself is really good, it's a bit short and only introduces the general idea (as most papers do). Although Bruneton provides the associated source code on his homepage, many people (including me) had problems understanding how it really works. I don't know how many hours (or even days) I spent sitting in front of some equations trying to figure out what they were supposed to do.

Fortunately, many people on this forum helped me understand the code, and therefore I want to give something back to the community by sharing my master thesis, which is called "Deferred Rendering of Planetary Terrains with Accurate Atmospheres". It can be found on my homepage (a direct link can be found here).

In my thesis I really tried to explain all the tricky parts of Bruneton's scattering model in an "easy" way and to create the document I wished I had back then. I guess this is also the reason why it reads almost like a tutorial rather than an academic work.

In this way I want to thank the community again for helping me during my studies. I really hope my thesis will prove useful to many of you. Last but not least, here are some screenshots of my results (a video can be found here).

Attached File: atmospheric01.jpg (53.73KB)
Attached File: atmospheric04.jpg (26.67KB)
Attached File: atmospheric05.jpg (34.53KB)

Depth of Field (GPU Gems 3) Problems

27 October 2011 - 11:53 AM

I am currently trying to implement the depth of field algorithm as described in GPU Gems 3 (link).

I am having some difficulties getting it running. My biggest problem is finding suitable parameters to test it properly.

In the downsample shader (step 1):
dofEqWorld - what exactly is stored in this vector?
It is used in the equation:

sceneCoc = saturate( dofEqWorld.x * depth + dofEqWorld.y ); 

According to its usage I think it holds the cocScale and cocBias values as described here, but I am not sure about this (that article also uses abs instead of saturate).

If that is the case, what are typical values for aperture and focal length? Should the depth value in this equation be provided as view-space depth or normalized depth? And should the depth value be linear?
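To make my question more concrete, here is how I currently picture such a scale/bias pair being built. This is only my own guess (an artist-driven linear ramp in view-space depth rather than a physically based aperture/focal-length formula), and the parameter names focusDepth and fullBlurDepth are made up:

[source lang="cpp"]struct Float2 { float x, y; };

// Guess: a linear CoC ramp in view-space depth that is 0 at focusDepth and
// reaches 1 at fullBlurDepth, so that
//     sceneCoc = saturate(dofEqWorld.x * depth + dofEqWorld.y)
// matches the shader line above. Parameter names are placeholders.
Float2 ComputeDofEqWorld(float focusDepth, float fullBlurDepth)
{
    Float2 eq;
    eq.x = 1.0f / (fullBlurDepth - focusDepth);   // scale
    eq.y = -focusDepth * eq.x;                    // bias
    return eq;
}

// CPU-side reference of the shader expression, handy for testing values.
float SceneCoc(Float2 dofEqWorld, float viewDepth)
{
    float coc = dofEqWorld.x * viewDepth + dofEqWorld.y;
    if (coc < 0.0f) coc = 0.0f;   // saturate
    if (coc > 1.0f) coc = 1.0f;
    return coc;
}[/source]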

There is also a similar vector used in the last step:

 farCoc = saturate( dofEqFar.x * depth + dofEqFar.y );

I guess these would be the same kind of values, right?

Hopefully someone can answer these questions, because I keep playing around with values but never get any reasonable results.

Frustum & Reconstructing Position with Depth Map

01 August 2011 - 02:06 AM

I know this has been discussed several times, but I still have a question to ask ;)
I have also read MJP's post on this topic: http://mynameismjp.w...ion-from-depth/ but he doesn't mention the near plane and I get wrong results with it :(

First I want to say that at the moment everything works fine, but I don't know if there is a better solution than my approach - so here it is:

I store my depth linearly by multiplying the z-value before the perspective divide by the w component and the inverse far plane, like this:

position.z = position.z * position.w * 1/farplane;

Reconstruction is then done by obtaining the vectors to the near and far planes of the view frustum. So in the vertex shader I do something like:

output.NearToFar = PositionFarPlane - PositionNearPlane;
output.CameraToNear = PositionNearPlane - CameraPos;

where PositionFarPlane and PositionNearPlane are initialized with the proper frustum corners for the given vertex of the full-screen quad.
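For completeness, here is a rough sketch of how those corner vectors could be computed on the CPU. This is not my actual code; the camera parameters (forward, up, right, fovY, aspect and so on) are placeholder names for illustration:

[source lang="cpp"]#include <cmath>

struct Vector3 { float x, y, z; };

static Vector3 Sub(Vector3 a, Vector3 b)
{
    Vector3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

// Sketch: compute the four near-plane and far-plane corners in world space
// from the camera basis and projection parameters, then derive the two
// per-vertex vectors used by the full-screen quad.
void ComputeFrustumVectors(Vector3 cameraPos, Vector3 forward, Vector3 up, Vector3 right,
                           float fovY, float aspect, float nearPlane, float farPlane,
                           Vector3 cameraToNear[4], Vector3 nearToFar[4])
{
    const float tanHalfFov = std::tan(fovY * 0.5f);

    // Half extents of the near and far plane rectangles.
    const float nearHeight = tanHalfFov * nearPlane;
    const float nearWidth  = nearHeight * aspect;
    const float farHeight  = tanHalfFov * farPlane;
    const float farWidth   = farHeight * aspect;

    // Corner order: (-1,-1), (+1,-1), (+1,+1), (-1,+1), matching the quad vertices.
    const float sx[4] = { -1.0f,  1.0f, 1.0f, -1.0f };
    const float sy[4] = { -1.0f, -1.0f, 1.0f,  1.0f };

    for (int i = 0; i < 4; ++i)
    {
        Vector3 nearCorner = {
            cameraPos.x + forward.x * nearPlane + right.x * sx[i] * nearWidth + up.x * sy[i] * nearHeight,
            cameraPos.y + forward.y * nearPlane + right.y * sx[i] * nearWidth + up.y * sy[i] * nearHeight,
            cameraPos.z + forward.z * nearPlane + right.z * sx[i] * nearWidth + up.z * sy[i] * nearHeight };

        Vector3 farCorner = {
            cameraPos.x + forward.x * farPlane + right.x * sx[i] * farWidth + up.x * sy[i] * farHeight,
            cameraPos.y + forward.y * farPlane + right.y * sx[i] * farWidth + up.y * sy[i] * farHeight,
            cameraPos.z + forward.z * farPlane + right.z * sx[i] * farWidth + up.z * sy[i] * farHeight };

        cameraToNear[i] = Sub(nearCorner, cameraPos);   // output.CameraToNear
        nearToFar[i]    = Sub(farCorner, nearCorner);   // output.NearToFar
    }
}[/source]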

In the corresponding pixel shader I do this:

WorldPos = CameraPos + input.CameraToNear + DepthValue * input.NearToFar;

So basically this seems to work quite well, but I wonder whether it is perhaps a roundabout way of doing it. Are there any better solutions?

Thanks for your help!
