

Member Since 18 Jul 2009
Offline Last Active Dec 16 2015 08:58 PM

Topics I've Started

Compile Shaders/Load Extensions without setting up a window

17 September 2013 - 12:48 AM

I'm trying to write a simple Windows app which loads a bunch of GLSL shaders and compiles them. From searching online it seems this requires querying glCreateShader and friends as extensions, which means calling wglGetProcAddress to get the entry points (I'm trying to target GL 3.3). I tried doing this but the call always fails (since I don't have a WGL context, and furthermore no window set up).


Is there any way to get these entry points without creating a window? I don't need to do any rendering at all (ever), just compile a bunch of shaders and check the output for errors.


From what I've found so far, I haven't seen a way of doing this without creating a window first. Is there any way to compile shaders offline, or a quick way to set up a dummy context without doing all the window setup? Setting up a window just to compile shaders seems a bit hacky.
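For reference, the usual workaround is a hidden dummy window: register a minimal window class, create a legacy GL context on its DC, make it current, and only then load the function pointers. A rough sketch is below (Windows-only, untested here, error handling trimmed; the function name CreateDummyGLContext is my own):

```cpp
// Sketch: hidden 1x1 dummy window + legacy WGL context, created only so that
// wglGetProcAddress works. The window is never shown and nothing is drawn.
#include <windows.h>
#include <GL/gl.h>

bool CreateDummyGLContext(HWND* outWnd, HDC* outDC, HGLRC* outRC)
{
    WNDCLASSA wc = {};
    wc.lpfnWndProc   = DefWindowProcA;
    wc.hInstance     = GetModuleHandleA(NULL);
    wc.lpszClassName = "DummyGL";
    RegisterClassA(&wc);

    HWND wnd = CreateWindowA("DummyGL", "", 0, 0, 0, 1, 1,
                             NULL, NULL, wc.hInstance, NULL);
    HDC dc = GetDC(wnd);

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_SUPPORT_OPENGL;   // no PFD_DOUBLEBUFFER; we never draw
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    HGLRC rc = wglCreateContext(dc);
    if (!rc || !wglMakeCurrent(dc, rc))
        return false;

    *outWnd = wnd; *outDC = dc; *outRC = rc;
    return true;
}

// With the dummy context current, wglGetProcAddress succeeds, e.g.:
// typedef GLuint (APIENTRY *PFNGLCREATESHADERPROC)(GLenum type);
// PFNGLCREATESHADERPROC pglCreateShader =
//     (PFNGLCREATESHADERPROC)wglGetProcAddress("glCreateShader");
```

Note the legacy context created this way is nominally GL 1.x, but on current drivers wglGetProcAddress will still return the 3.3 entry points; if you need a strict 3.3 core context, you'd bootstrap wglCreateContextAttribsARB from this dummy context first.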




12 July 2013 - 01:10 AM



I'm seeing a crash inside CreateDevice in an old code base when initializing a D3D device with the D3DCREATE_HARDWARE_VERTEXPROCESSING flag. The D3D initialization runs on its own thread inside a native DLL called from managed code. I am able to run with D3DCREATE_SOFTWARE_VERTEXPROCESSING or D3DCREATE_MIXED_VERTEXPROCESSING, but D3DCREATE_HARDWARE_VERTEXPROCESSING always crashes inside CreateDevice. These flags are always OR'd with D3DCREATE_MULTITHREADED.


There are no error messages returned, as the code crashes inside the function, sometimes in gdi32.dll (and a few times in nvd3dum.dll). It is always a stack overflow exception caught by a _chkstk in the thread running the DLL. I don't have source for these, so I don't know what's going on. I'm on the latest NVIDIA drivers and have tried both the debug and release d3d9 runtimes. (Note: this used to work on older/last year's drivers.)


I know the card (a GTX 460 SE) supports hardware vertex processing, and the same creation parameters work inside the D3D sample demo app. I tried increasing the thread stack size from the default 1 MB on Windows, but this had no effect on the crash either.


Has anyone seen anything like this before, or know what could be causing it? Is there anything else I can do to track down the problem?




Light Prepass output color differences

17 March 2013 - 10:09 PM

I was considering switching my engine from deferred shading to a light prepass (deferred lighting) approach. From my initial reading on deferred lighting, it seems this method will not generate the same output as deferred shading, since the diffuse and specular colors of the materials are not taken into account during light buffer generation. So if an object is affected by multiple lights, the surface color is only applied to the output once, versus the deferred shading approach, which multiplies in the surface color for each light (I am talking about the Phong model specifically).


I assume that to match the old output, I would have to either modify the light properties for each light, or change the deferred shading implementation to only apply the surface color once. Another option is to add surface data to the g-buffer, but that brings us back to deferred shading. In my current implementation I can switch between deferred and forward shading and the output is about the same; however, this will no longer be the case with deferred lighting.


Is there something I am missing, or is this indeed the case? How are other engines which have switched to deferred lighting handling this? Are you just ignoring the differences and sticking with one lighting method, or applying some function in code to modify the light properties in a prepass renderer? I would assume this transition would be a bigger issue in large projects with multiple scenes and lights.
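As a sanity check on the diffuse half of this: lighting is additive, so accumulating N·L times light color per light and multiplying the albedo in once afterwards is algebraically identical to multiplying the albedo in per light (distributivity). A quick numeric check, sketched in plain Python with invented values:

```python
# Diffuse term only: does "albedo per light" differ from "albedo once"?
# Two hypothetical lights; the N.L factors and light colors are made up.
albedo = (0.8, 0.4, 0.2)
lights = [
    (0.9, (1.0, 0.2, 0.2)),   # (N.L, light color): a reddish light
    (0.5, (0.2, 0.2, 1.0)),   # a bluish light
]

# Deferred shading: surface color multiplied in for every light.
deferred = [0.0, 0.0, 0.0]
for n_dot_l, color in lights:
    for c in range(3):
        deferred[c] += albedo[c] * n_dot_l * color[c]

# Light prepass: accumulate lighting first, apply surface color once.
light_buffer = [0.0, 0.0, 0.0]
for n_dot_l, color in lights:
    for c in range(3):
        light_buffer[c] += n_dot_l * color[c]
prepass = [albedo[c] * light_buffer[c] for c in range(3)]

print(deferred, prepass)  # identical up to float rounding
```

So for pure diffuse the two orderings agree exactly; the real divergence is specular, since typical prepass layouts pack specular into a single luminance channel of the light buffer, approximating colored highlights and per-material specular power.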



Reconstructing WorldPos from Depth Texture/Projection question

13 March 2013 - 02:17 AM

I'm trying to figure out how to reconstruct a world-space position from a depth texture sample (stored as z/w) in a pixel shader. I eventually plan to move to linear depth, but I'm trying to get this to work with non-linear depth first. I mostly understand everything; I'm just confused about one bit.


I've been reading online and the general way to convert a depth value to a world position seems to be something like this:


float depth = depthTex.Sample(defaultSampler, IN.TexCoords).r; // sample depth (stored as z/w)
float4 pixelPos = float4(x, y, depth, 1.0f); // x and y are the pixel's NDC coords in [-1, 1]
float4 worldProjPos = mul(pixelPos, invViewProj); // inverse view-projection matrix takes NDC back towards world space
float3 worldPos = worldProjPos.xyz / worldProjPos.w; // <---


My question is about the last line. 


Why do we divide by the w coordinate during unprojection? I know we do a perspective divide when going from camera space to projection/clip space, but why does the opposite transform, which undoes the projection, require another divide? I thought the 'w' was already in the denominator, so wouldn't we need to multiply by 'w'?


I guess I'm just confused about how projection matrices work (I never fully understood them). Any links which explain this, or tips on what I'm missing? That's the only thing I'm confused about.
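One way to see it: the inverse matrix undoes the linear part of the projection, but not the divide. If clip = p·P with clip.w = w, the texture stores clip/w; pushing that through P⁻¹ gives (p·P)·P⁻¹/w = p/w, a point whose components are all scaled by 1/w, with 1/w itself landing in the result's w slot. Dividing by that recovered w gives back p. Here is a numeric check with a hypothetical perspective matrix standing in for the whole view·projection chain (plain Python, row-vector convention to match mul(pixelPos, mat) in HLSL):

```python
# Why unprojection ends with a divide by w: numeric sketch with a
# hypothetical D3D-style perspective matrix (90 deg fov, aspect 1,
# near = 1, far = 100). Row vectors: clip = v * M.

def vec_mat(v, m):
    # Row vector times 4x4 matrix.
    return [sum(v[r] * m[r][c] for r in range(4)) for c in range(4)]

def invert4(m):
    # Gauss-Jordan elimination on an augmented [m | I] matrix.
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(4)]
         for i, row in enumerate(m)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(4):
            if r != col:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[4:] for row in a]

n, f = 1.0, 100.0
proj = [[1, 0, 0,              0],
        [0, 1, 0,              0],
        [0, 0, f / (f - n),    1],
        [0, 0, -n * f / (f - n), 0]]

point = [2.0, 3.0, 10.0, 1.0]        # a point at camera depth 10
clip  = vec_mat(point, proj)          # clip[3] == 10, the camera depth
ndc   = [c / clip[3] for c in clip]   # this is what the depth texture stores
back  = vec_mat(ndc, invert4(proj))   # == point / 10, so back[3] == 0.1
recon = [back[i] / back[3] for i in range(3)]
print(recon)  # recovers [2.0, 3.0, 10.0] up to float error
```

So the second divide isn't undoing the first; it is dividing out the uniform 1/w scale that the first divide smeared across every component, which is exactly why that scale has to be carried along in the w slot.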



Fake 3D in 2D

26 November 2010 - 03:38 PM

Hi, I'm writing a top-down 2D tennis game using sprites and bitmaps. Currently it resembles Pong, but I'd like to move towards a more 3D-looking version. I was wondering if anyone knows any tricks to make 2D sprites appear to have 3D depth without actually moving to a 3D coordinate system? I just want the ball to not look as flat as it does right now.

I'm aiming for something like this:
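One common trick for top-down ball sports is to track the ball's height as a separate variable, scale the ball sprite up as it rises, and draw a separate shadow sprite that stays pinned to the court. A sketch of the idea (plain Python, all constants arbitrary and meant to be tuned):

```python
# Fake-depth trick for a top-down ball: keep a separate "height" value,
# scale the ball sprite with it, and keep a shadow sprite on the court.
# Constants below are made up for illustration.

BASE_SIZE      = 16    # sprite size in pixels when the ball is on the court
SCALE_PER_UNIT = 0.05  # how much bigger the ball gets per unit of height
SHADOW_OFFSET  = 0.5   # shadow slides sideways as the ball rises

def ball_draw_params(x, y, height):
    """Return (ball_sprite_size, shadow_x, shadow_y, shadow_alpha)."""
    size = BASE_SIZE * (1.0 + SCALE_PER_UNIT * height)  # "closer" to camera
    shadow_x = x + SHADOW_OFFSET * height               # shadow drifts away
    shadow_y = y                                        # stays on the court
    alpha = max(0.15, 1.0 - 0.05 * height)              # fainter when high
    return size, shadow_x, shadow_y, alpha

# Ball on the court: full-size sprite, shadow directly underneath.
print(ball_draw_params(100, 50, 0))
# Ball at height 10: bigger sprite, offset and faded shadow.
print(ball_draw_params(100, 50, 10))
```

The gap between the ball sprite and its shadow is what sells the height; adding a slight vertical squash to the shadow and a parabolic height curve for the ball's flight makes it read even better.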