

Member Since 30 Oct 2009

#5163551 Screen-Space Subsurface Scattering artifacts (help)

Posted by allingm on 28 June 2014 - 07:55 PM

It's possible you are having depth precision issues.  If you tweak your near and far planes, does the banding change size?  If this is the problem, you could take a different approach and set your blur width based only on the first sample.  I know the original Jimenez paper did a ddx and ddy calculation on the first depth sample to figure out the slope.



Oh, also, just for testing, try sampling the alpha of your specular map on each blur sample and reject the color if the resulting sample is greater than 0.  (I'm thinking my previous comment is wrong now, but I'll keep it just in case.)

#5077281 Fast exp2() function in shader

Posted by allingm on 13 July 2013 - 01:16 AM


The only way to find out the answer to your question is to test performance.  You can also get an idea by looking at the token assembly of your shader.


Also, integer operations were emulated with floats in DirectX 9, but DirectX 10 requires full integer support.  I don't see how they could emulate that with floats, and if they did you would notice a huge performance impact.

#5072803 Environment reflection & fresnel correct ?

Posted by allingm on 25 June 2013 - 01:02 PM

You can also find more on specular occlusion here:



They are near the end.

#5044477 pimpl for renderer class

Posted by allingm on 18 March 2013 - 11:38 PM

Would this work for you?


// public header

class CTexture; // Use forward declaration to avoid exposing the implementation.

class Renderer
{
public:
  // All operations on this texture will happen through the renderer.
  void DoSomethingToTexture(CTexture* tex);
};

// private cpp/header

class CTexture
{
public:
  // Do your platform specific stuff in the cpp or private header.  This is
  // where "Renderer" is implemented.
  LowLevelRenderer stuff;
};


You can avoid the clutter this way, but you have to guarantee that the user (public) will only ever pass around the pointer.  If you want to be able to use the texture class directly you will have to use Pimpl or virtuals.  Personally I would use virtuals, as the call overhead compared to the amount of work is low.  Virtuals get expensive when the call overhead is high compared to the amount of work done per call.  For instance, you'll probably have a few hundred expensive draw calls, but particles might have a million low-cost operations.  So, I wouldn't use virtuals on particles.  Of course, this all depends on your platform.

#5041957 How to abstract OpenGL or DirectX specific things from classes specific to re...

Posted by allingm on 11 March 2013 - 12:40 PM

I would check out Humus's "Framework 3": http://www.humus.name/index.php?page=3D

#4990607 Tonemapping Formula Help (Math Help)

Posted by allingm on 15 October 2012 - 09:54 PM

I finally found the book. You can find all the correct formulas here:


#4983458 Multiply RGB by luminance or working in CIE ? Which is more correct ?

Posted by allingm on 24 September 2012 - 11:25 PM

The dot product you are talking about is actually doing the RGB to Yxy conversion, but because all you care about is the luminance, it simplifies down to a dot product. However, if you want to properly convert the whole color from RGB to Yxy, scale it, and then convert it back, it is a much more expensive operation. So, both are correct: one contains all the information necessary for RGB -> Yxy -> RGB, and one only contains enough for RGB -> Y.

I found the equations here: http://stackoverflow.com/questions/7104034/yxy-to-rgb-conversion If you do the algebra on the conversion functions you'll find what I said to be true.

#4979173 HDR Help

Posted by allingm on 11 September 2012 - 10:43 PM

I setup my render target texture as D3DFMT_A16B16G16R16F. How do I actually get those extra bits of color? If I render my scene as-is and then render that texture out to the screen, it's exactly the same.
Question 1:
The HDR_Pipeline actually has a pixel shader that multiplies each RGB by a scalar factor. Is this how HDR is done? So, every object (or really every shader) in my scene now has to multiply its pixel RGB by some magical scalar factor so I can later use that for my luminance calculation? Is this how everyone does it, taking the result RGB from a model and multiplying it by some scalar to get results over 1.0?

So, LDR (low dynamic range) is when light values are in the range [0, 1], while HDR (high dynamic range) is when light values are in the range [0, infinity). The D3DFMT_A16B16G16R16F texture can hold values in (-infinity, +infinity), so the texture format doesn't need any work to hold extra data. Two things need to change. First, the lights in your scene must actually add up to something past 1 (or start at a value greater than 1). Second, you must realize that no matter what your render target supports, the computer screen only supports [0, 1]. You need to map [0, infinity) to [0, 1]. This conversion is called "tone mapping".

MJP has a really great demo here: http://mynameismjp.w...ss.com/2010/04/

There is also a great book on GameDev that has tons of information, but I can't seem to find it at the moment.

Question 2:
Is there any way faster to get the average or log luminance value other than downsampling? If not, do people generally start at 256x256...

Not really; you can downsample into any starting size of texture, but anything too small will not be 100% accurate. MJP also had a demo that did this in a compute shader, but I'm not sure that is exactly what you're asking for. ( http://mynameismjp.w...ss.com/2011/08/ )

#4913722 Looking for a name of this visual effect

Posted by allingm on 16 February 2012 - 12:50 PM

I don't know the name of the technique, and I don't think it really has a name. I think the name of the artifact is screen distortion, but that is a bit vague. Is there a reason you need the name? If you want to know how to implement the technique we can certainly explain it. I'll give it a try here just in case that is what you want:

Do a post-processing blur. Instead of blurring the whole screen, blur only along a line: find each pixel's distance to the line and strengthen the effect the closer the pixel is. The line should also have a direction, so you blur in that direction. To get a zigzag effect, supply multiple lines in different directions. You can move the lines up and down the screen as necessary.

Does this answer your question?

#4806634 How are the coordinate axes drawn in this video?

Posted by allingm on 04 May 2011 - 04:47 PM

Draw three lines in model space as the axis you would expect.

Line 0: <1,0,0>, <0,0,0>
Line 1: <0,1,0>, <0,0,0>
Line 2: <0,0,1>, <0,0,0>

Scale them to the appropriate size. Rotate them based on the camera rotation. Position them in front of the camera, but do not transform them from world to view space; skip the world transform and assume they are in view space already. Also, turn off depth testing so they draw over anything else that renders (make them essentially part of the UI).

You are essentially rendering UI, but instead of using an orthographic projection you are using a perspective projection.

#4802921 Best place to start shader programming

Posted by allingm on 25 April 2011 - 09:30 PM

I've only ever programmed the fixed function pipeline and want to start playing with shaders. Where, in terms of sites, books, tools and language, is a good place to start?

I started my learning with DirectX. You can find the best tutorial for shaders in the following books:
http://www.amazon.co...03786385&sr=8-2 (DirectX 9)
http://www.amazon.co...03786385&sr=8-1 (DirectX 10)

You can also find a tutorial in the GameDev book:

Assume I have good knowledge of 3d programming but no knowledge of shader technology at all, how it works, etc. What are the things I should know?

If you want a run down of the pipeline you can probably find it in the books above, but you can also find it in this awesome book:

What are the main shader languages and what are the differences?

There are mainly two "shader languages": HLSL and GLSL. HLSL works with DirectX and GLSL with OpenGL. The differences between them are actually quite minimal; where they really differ is between the versions within each. I'm most familiar with HLSL, so I'll break it down for you. You can find the different version numbers (for some reason 5.0 isn't included) here: http://en.wikipedia....Shader_Language While the page lists the hardware differences, the main things you need to know are:

v 3.0 - DirectX 9
Pixel and vertex shaders
v 4.0 - DirectX 10
Adds the geometry shader
v 5.0 - DirectX 11
Adds the compute shader and hull shaders for tessellation

Do I have to write different shaders for OpenGL and DirectX?

Yes, but the code is very similar.

I'd also be interested to know what level of shader programming is currently 'compatible' with most of the PC hardware base just now.

If you're shooting for the greatest level of compatibility, target DirectX 9. If you want to learn the most shader features, go with DirectX 11 (requires Windows Vista or 7). I would use DirectX 11 for making graphics demos and learning, and DirectX 9 for making a game other people can play.