mancubit

Member
  • Content count: 143
  • Joined
  • Last visited

Community Reputation: 536 Good

About mancubit
  • Rank: Member
  1. Regarding the books: if you want an introduction to graphics programming using DX11, get Luna's book. If you want to get comfortable with the DX11 API, get Practical Rendering and Computation with Direct3D 11. Both books are really good, but the first is more focused on learning graphics programming than on learning the API. I think the second one would fit your needs better.
  2. Remember state of pixel in HLSL.

    I guess you could also solve it via the stencil buffer: if a pixel does not change, clip it; otherwise write a one (or some other value) into the stencil. You can then use the stencil buffer to mask out only the changed (or unchanged) pixels.
  3. I am currently working on the shader effect system for my rendering engine and I am not quite sure how to design it properly. A shader effect in my engine describes the shader pipeline configuration for a single rendering effect (basically, setting the right shaders for each stage and setting the correct shader parameters). What I want to achieve is a system with high flexibility that does not become overly complex and unmaintainable. That said, I can't figure out a good way to handle shader parameters that exist in more than one shader but differ in type. For instance, say I have a single shader effect combining a vertex shader with a parameter "Color" as a vector3 and a pixel shader that also has a parameter "Color", but as a vector4. It would be nice to set parameters by name, but what should I do in the case above? The solutions that came to mind are the following:

    1. Do not allow combinations of shaders where shader parameters conflict (rather inflexible).
    2. Ignore shader parameter conflicts and just set the bytes that fit into the parameter / are provided by the application (may result in hard-to-find errors).
    3. Set the parameters per shader and not for the whole shader effect (uncomfortable).

    Personally I am not a big fan of any of these solutions, but I think (1) and (2) could be OK. Setting the parameters for every single shader (3) feels a little too tedious. I am interested in how you handled this problem in your code. Are there any good solutions I have not thought of? Thanks for your help!
  4. @all: thanks - everything you say makes perfect sense. I guess I will go with the preprocessor solution then (it even makes the code easier). @L. Spiro: the link to your post is interesting, thanks for that too. @kunos: yes, you are right, dynamic casts aren't necessary; static_casts would be sufficient (in any case I want to minimize casting altogether).
  5. I am currently trying to build an API-agnostic rendering engine. I do this simply for fun and hope to learn a lot from it, so it's nothing professional, but it should serve as a basis for rendering experiments or maybe a game someday. The thing I have problems with is how to handle the border between multi-platform and API-specific code; I can't really find a way to avoid massive use of dynamic casts here. I know this may sound like premature optimization (which, to a certain extent, it possibly is), but as I said I want to gain experience and I don't think I have found the best possible solution yet - so I decided to ask the community. Let's take the shader system as an example: I have an abstract base class called "Shader" which represents a single shader (vertex shader, pixel shader, etc.) and an abstract "Renderer" class which can set a specific shader by passing it an object of base class "Shader", like this: [source lang="cpp"]virtual void Renderer::SetVertexShader(Shader* shader) = 0;[/source] Now imagine I have an API-specific shader (derived from Shader) called "ShaderDX11" and a corresponding renderer (derived from "Renderer") called "RendererDX11". RendererDX11 implements the SetVertexShader method and performs the API-specific work to activate the shader. I can't figure out how to avoid a dynamic cast here to access the object as "ShaderDX11", because I only have a pointer to a "Shader" object. I know this can only be an object of type "ShaderDX11", yet I don't see how to avoid a dynamic cast every time I set a single shader. What bothers me is that I would have to perform such a cast for every single resource that interacts with API-specific code (buffers, textures, shaders, render states, etc.). Is it common practice to make massive use of dynamic casts here? Or am I just missing something? Thanks for your help!
  6. Structure of classes in good Game Engine?

    I found the service locator pattern (as described here: http://gameprogrammingpatterns.com/service-locator.html) to be a good alternative to the singleton. It's still very global, but not as restrictive.
  7. Hm, what about something like this? [CODE]
    std::vector<Combination> combinations;
    for (uint i = 0; i < elements_max; ++i)
        for (uint j = i + 2; j < elements_max; ++j)
            combinations.push_back(Combination(element[i], element[j]));
    [/CODE]
  8. You can find the answers on the official homepage (but only for the first part - I don't know if there are questions in the later parts of the book). See: http://www.d3dcoder.net/d3d9c.htm
  9. This can happen if you haven't generated mip maps and sample between them (as they might be black). You can force sampling of the top mip level by using the tex2Dlod intrinsic. This is just a guess, as I think you are doing a fullscreen pass at the same resolution as the source texture (where it should sample the top mip level automatically), but it's worth a try.
  10. Atmospheric scattering

    I did something similar for my master's thesis. Maybe this helps you out: http://www.gamedev.net/topic/619745-brunetons-atmospheric-scattering-demystified/
  11. missed that little detail too
  12. PCF seems fine to me, although I prefer to interpolate the samples based on the original texture coordinate instead of just averaging them. Did you make sure your sampler is set to point sampling? Using linear filtering for shadow mapping can often cause weird artifacts. A picture of the output would help.
  13. Poor shadow mapping results

    sorry for reposting - the second page is what might interest you: http://takinginitiative.net/2011/05/25/directx10-tutorial-10-shadow-mapping-part-2/
  14. Poor shadow mapping results

    I don't know if I understand you correctly, but did you blur the shadow map itself? That will not work, as the shadow map only stores the depth of the scene as seen from the light source, not the shadow itself. The easiest way to get softer shadows is PCF filtering. A good description can be found here: http://takinginitiative.net/2011/05/15/directx10-tutorial-10-shadow-mapping/ - don't mind that it is for DX10; the theory is the same.
  15. Screen Space 'Pseudo' Lens Flare

    Thanks for sharing your technique - I will give it a try in the near future.