
Community Reputation

766 Good

About Spidey

  1. Yeah, it makes sense to have a WGL context; I just don't see why it requires a window to be set up, and I was hoping there was a way to create a GL context without creating a window, just like you can create a D3D11 device without creating a window and compile all your shaders using D3DCompile.

    If I must create a dummy window, is it safe to destroy the hWnd etc. after getting the addresses for the extensions? I know the context must stay alive, but I'd like to clean up all the other window stuff if possible.
  2. I'm trying to write a simple Windows app which loads a bunch of GLSL shaders and compiles them. From searching online, it seems this requires querying the glCreateShader etc. entry points, which requires calling wglGetProcAddress (I'm trying to target GL 3.3). I tried doing this, but the call always fails (since I don't have a WGL context, and furthermore no window set up).

    Is there any way to get these entry points without creating a window? I don't need to do any rendering at all (ever), just compile a bunch of shaders and check the output for errors.

    From what I've found so far, I haven't seen a way of doing this without creating a window first. Is there any way to compile shaders offline, or a quick way to set up a dummy context without doing all the window setup? Window setup seems a bit hacky just for shader compilation.

    Thanks!
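A minimal sketch of the dummy-window approach discussed in the post above, assuming plain Win32 and a legacy context created just to reach wglGetProcAddress (the window class name and the `glCreateShader_` pointer name are illustrative; error handling is trimmed):

```cpp
#include <windows.h>
#include <GL/gl.h>

// Entry-point type for the one function we load as a demonstration.
typedef GLuint (WINAPI *PFNGLCREATESHADERPROC)(GLenum type);

int main() {
    // Invisible dummy window, only needed to get a DC with a GL pixel format.
    WNDCLASSA wc = {};
    wc.lpfnWndProc   = DefWindowProcA;
    wc.hInstance     = GetModuleHandleA(nullptr);
    wc.lpszClassName = "DummyGL";   // illustrative class name
    RegisterClassA(&wc);

    HWND hWnd = CreateWindowA("DummyGL", "", 0, 0, 0, 1, 1,
                              nullptr, nullptr, wc.hInstance, nullptr);
    HDC hDC = GetDC(hWnd);

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    SetPixelFormat(hDC, ChoosePixelFormat(hDC, &pfd), &pfd);

    // A legacy context is enough to call wglGetProcAddress.
    HGLRC ctx = wglCreateContext(hDC);
    wglMakeCurrent(hDC, ctx);

    // Entry points are now reachable; compile shaders, read logs, etc.
    PFNGLCREATESHADERPROC glCreateShader_ =
        (PFNGLCREATESHADERPROC)wglGetProcAddress("glCreateShader");

    // Cleanup once done; the context must stay current while GL is used.
    wglMakeCurrent(nullptr, nullptr);
    wglDeleteContext(ctx);
    ReleaseDC(hWnd, hDC);
    DestroyWindow(hWnd);
    return glCreateShader_ != nullptr ? 0 : 1;
}
```

Note that wglGetProcAddress only works with a current context, and the pointers it returns are in principle tied to the pixel format of that context, so compile the shaders before tearing the context down.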
  3. Actually, it's probably not a callback blowing the stack, as I would see it in the callstack; it could still be a stomp, though.
  4. Thanks for the reply, mhagain. I did take a quick look and the code around the device creation looked OK, but I'll check again. (I also tried increasing the thread stack just as a test, but I still got the crash, so I was thinking something might just be blowing the stack somewhere: infinite recursion, or a huge allocation on the stack.) Yeah, I'm assuming the fault is in our code and not in the drivers. I got the sample app to work correctly; however, I haven't tried spawning a thread from a native DLL called from managed code and creating a device from there yet.

    I was curious whether the CreateDevice function might have some callbacks into user code? If so, what would these callbacks be? For example, the D3DXEffect system has user callbacks which get called from inside the D3D DLLs. Maybe CreateDevice is calling such a callback which is causing the stack overflow. If so, what kind of usage would trigger a callback from inside CreateDevice? I can't step into this function, so I'm not sure how to find out.
  5. Yeah, that's what I'm assuming. Software VP would mean the vertex shader is emulated on the CPU, whereas mixed picks hardware if it can and otherwise falls back to software?

    Maybe that's why it fixed my crash: by delaying some initialization to a later point, so the crash doesn't happen.
  6. Thanks for the suggestion. Yup, just tried it:

        D3DCAPS9 Caps;
        sys->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &Caps);
        if (Caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT)
            return true;
        return false;

    It returns true.

    What is D3DCREATE_SOFTWARE_VERTEXPROCESSING anyway? Does this mean the entire geometry stage (vertex fetch, vertex shader, etc.) runs in software, or just the old fixed-function stages? I can even get it to work with D3DCREATE_MIXED_VERTEXPROCESSING, which makes the app run a lot faster, so I'm assuming it's doing all the heavy lifting on the GPU. I just don't know why it's crashing with D3DCREATE_HARDWARE_VERTEXPROCESSING.
  7. Hi, I'm seeing a crash inside CreateDevice in an old code base when initializing a D3D device with the D3DCREATE_HARDWARE_VERTEXPROCESSING flag. The D3D initialization is contained in its own thread, which lives in a native DLL being called from managed code. I am able to run with D3DCREATE_SOFTWARE_VERTEXPROCESSING or D3DCREATE_MIXED_VERTEXPROCESSING, but D3DCREATE_HARDWARE_VERTEXPROCESSING always crashes inside CreateDevice. These flags are always OR'd with D3DCREATE_MULTITHREADED.

    There are no error messages returned, as the code crashes inside the function; sometimes in gdi32.dll (and a few times in nvd3dum.dll). It is always a stack overflow exception caught by a _chkstk in the thread running the DLL. I don't have source for these, so I don't know what's going on. I'm on the latest NVIDIA drivers and have tried both debug and release D3D9 runtimes. (Note: this used to work on older/last year's drivers.)

    I know the card (GTX 460 SE) supports hardware vertex processing, and I am able to get the code with the same creation parameters to work inside the D3D sample demo app. I tried increasing the thread stack size from the default 1 MB on Windows, but this had no effect on the crash either.

    Anyone seen anything like this before, or know what could be causing this? Or anything else I can do to track down the problem?

    Thanks!
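Not from the thread, but the usual defensive pattern when a particular vertex-processing flag is unreliable on some configurations is to try the flags in order of preference and fall back (a sketch assuming plain D3D9; `CreateBestDevice` is an illustrative name, and this obviously only helps when CreateDevice fails cleanly rather than crashing):

```cpp
#include <d3d9.h>

// Sketch: try hardware VP first, then mixed, then software, always OR'd
// with D3DCREATE_MULTITHREADED as in the post above. Error handling trimmed.
IDirect3DDevice9* CreateBestDevice(IDirect3D9* d3d, HWND hWnd,
                                   D3DPRESENT_PARAMETERS* pp) {
    const DWORD vpFlags[] = {
        D3DCREATE_HARDWARE_VERTEXPROCESSING,
        D3DCREATE_MIXED_VERTEXPROCESSING,
        D3DCREATE_SOFTWARE_VERTEXPROCESSING,
    };
    for (DWORD vp : vpFlags) {
        IDirect3DDevice9* device = nullptr;
        HRESULT hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                       hWnd, vp | D3DCREATE_MULTITHREADED,
                                       pp, &device);
        if (SUCCEEDED(hr))
            return device;   // first flag that works wins
    }
    return nullptr;          // no HAL device could be created
}
```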
  8. Definitely still the best real-time graphics book out there, IMO. I haven't heard anything about a 4th edition yet, but the 3rd is well worth buying.
  9. Does Nsight have a standalone version? I thought it was integrated into Visual Studio, which would require the Pro version, since Express doesn't support add-ins. I've been using it myself recently, and although it's better than nothing, I still prefer PIX as a graphics debugger. Intel GPA is another alternative which might give you some info.

    As for PIX crashing, can you attach a debugger to the process to see why it crashes? I had a PIX crash last week which ended up being caused by the app trying to create a SW or REF device; apparently PIX only works with a HAL device (this was D3D9, of course).
  10. Deferred rendering issue

    If you have PIX, you can output your corrupted G-buffer to the back buffer and run a single-frame capture on it, then use the "debug this pixel" feature on the black pixels by right-clicking them. That will let you step through the shader to see what is wrong.
  11. Deferred rendering issue

    I've never had issues with precision in vertex shaders, so I don't think that would be the issue. Try rendering your scene in wireframe; it might bring some geometry issues to light.
  12. The main reason was material flexibility without having to store additional parameters in the G-buffer, but thinking about this more based on what you said, it doesn't seem that much more useful, since the BRDF has already been applied. Basically I was just wondering whether, if I were to switch, there would be any discrepancies in the output. I probably won't switch unless I find a good reason.
  13. Well, now that you put it like that, I can see that they are in fact identical :) (as long as we keep diffuse and specular separate).

    I guess I just got tripped up by thinking in terms of lights and passes and missed the simple equation.

    Thanks!
  14. Splitting diffuse + specular seems like a good idea. Wouldn't that still produce a different output than deferred shading, though? For example, let's take just diffuse into account, ignoring specular.

    My current deferred shading approach (which I believe is how everyone does it?):

        float4 color = float4(0, 0, 0, 0);
        for each light
            color += surfaceColor * (NdotL * lightDiffuseColor);
        frameBuffer = color;

    Deferred lighting approach:

        lightAccumBuffer = float4(0, 0, 0, 0);
        for each light
            lightAccumBuffer += NdotL * lightDiffuseColor;
        // scene render pass
        frameBuffer = lightAccumBuffer * surfaceColor;

    In the examples above, the surface color is multiplied into the lighting contribution for each light, and that result is added to the frame buffer, whereas in deferred lighting the surface color is only multiplied in once. Wouldn't this produce a different result?
  15. I was considering switching my engine from a deferred shading to a light pre-pass (deferred lighting) approach. From my initial reading on deferred lighting, it seems that this method will not generate the same output as deferred shading, since we are not taking the diffuse + specular colors of the materials into account during the light buffer generation. So if an object is affected by multiple lights, it will only apply the surface color to the output once, versus the deferred shading approach, which multiplies in the surface color for each light (I am talking about the Phong model specifically).

    I assume that to generate the same output as before, I would have to modify the light properties for each light, or modify the deferred shading implementation to only apply the surface color once. Another option is to add surface data to the G-buffer, but that brings us back to deferred shading. In my current implementation I can switch between deferred and forward shading and the output is about the same; however, this will no longer be the case with deferred lighting.

    Is there something I am missing, or is this indeed the case? How are other engines which have switched to deferred lighting handling this? Are you just ignoring the differences and sticking with one lighting method, or applying some function in the code to modify the light properties in a pre-pass renderer? I would assume this transition would be a bigger issue in large projects with multiple scenes and lights.