Tree Penguin

Members
  • Content count: 1529

Community Reputation: 262 Neutral

About Tree Penguin

  • Rank: Contributor
  1. Never mind, really stupid mistake: I was using a read-only depth view (because someone made that the default view). For reference, a sketch of the difference is below.
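    For anyone hitting the same thing, this is a minimal sketch of the two kinds of view (hypothetical names, assuming a D24S8 depth texture):

    [code]
    // Sketch with made-up names (g_depthTex, g_dsv); assumes a D24S8 depth texture.
    D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc;
    ZeroMemory( &dsvDesc, sizeof(dsvDesc) );
    dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
    dsvDesc.Texture2D.MipSlice = 0;

    // This is what I had: a read-only view, so depth/stencil writes
    // (including SV_Depth output) are silently dropped.
    // dsvDesc.Flags = D3D11_DSV_READ_ONLY_DEPTH | D3D11_DSV_READ_ONLY_STENCIL;

    // For SV_Depth to actually land in the buffer, the view must be writable:
    dsvDesc.Flags = 0;
    device->CreateDepthStencilView( g_depthTex, &dsvDesc, &g_dsv );
    [/code]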
  2. [quote name='kauna' timestamp='1319564607' post='4876842']Does the final result have to be a depth buffer? Couldn't you just output to a floating point precision render target?[/quote] Unfortunately yes: I am doing a low-resolution pass for which I need depth-tested stencil-buffer masking. [quote name='MJP' timestamp='1319567681' post='4876859']You really don't have to do anything special, just output a [0,1] value. If you're doing something majorly wrong then the runtime will usually yell at you in the debug output (assuming you created the device with the DEBUG flag). I assume that you've verified in PIX that the output depth buffer isn't getting the values that you expect?[/quote] That's what I thought, and why I'm so surprised I can't get it to work. I checked PIX (and NVIDIA PerfHUD): the depth buffer values stay the same, even though the value I write to oDepth when I debug the pixel shader is correct (though oDepth is listed as Name TBD, Value (_,_,_,_) and Type TBD in the Registers tab). I enabled the DX11 debug layer, and I get no errors or (related) warnings.
  3. Already read it; I am doing none of that. Also, the shader is nothing fancy: it's four depth-texture reads, some min and max instructions, and then it outputs the same value to SV_Target's R component and to SV_Depth. I've even tried just outputting the texture fetch result, which still didn't work (even though the color buffer contained the correct value).
  4. I'm trying to render to a depth buffer (D24S8) using SV_Depth in the pixel shader (for doing a selective downsample), but the value isn't written (I'm 100% sure of this). The value itself is correct: I output it to a color buffer as well. Depth writes are enabled and depth testing is disabled (see the state sketched below); enabling/disabling stencil doesn't seem to matter. Does anyone know of any limitations that might cause this? Or anything else that might go wrong? If there's some limitation, are there any alternatives for doing (selective) downsamples of a depth buffer? I need a stencil buffer with the downsampled result. Thanks
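    To be precise about "writes enabled, testing disabled", this is roughly the depth-stencil state I mean (a sketch from memory, not copy-pasted from my code):

    [code]
    // Sketch of the depth-stencil state. Note that with DepthEnable = FALSE,
    // D3D11 skips depth writes as well as the test, so to write without testing
    // the test has to stay enabled but always pass.
    D3D11_DEPTH_STENCIL_DESC dsDesc;
    ZeroMemory( &dsDesc, sizeof(dsDesc) );
    dsDesc.DepthEnable = TRUE;
    dsDesc.DepthFunc = D3D11_COMPARISON_ALWAYS;
    dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
    dsDesc.StencilEnable = FALSE;
    device->CreateDepthStencilState( &dsDesc, &g_dsState );
    [/code]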
  5. I am creating and releasing a lot of textures, but something appears to go wrong, because after a while I get out-of-memory errors from DX. To check what's going on I reduced it to this simple code, which I execute every frame:

    [code]
    window->g_pd3dDevice->CreateTexture2D( &Desc, NULL, &window->g_dummyTex );
    if( window->g_dummyTex )
    {
        // Release() returns the remaining reference count, not an HRESULT.
        refCount = window->g_dummyTex->Release();
    }
    window->g_dummyTex = NULL;
    [/code]

    Running this for a few hundred frames results in out-of-memory errors (and Windows telling me that I should save my work because memory is low). I checked Release's return value, but that's always OK. Does anyone have an idea what might cause this (or what I'm doing wrong)? I know it's better not to create and release textures a lot, but in this case avoiding that would be difficult, and I would assume that this should just work...
  6. DX11 - DX9.3 feature level with SM3?

    [quote name='dblack' timestamp='1306946896' post='4818325'] You could target D3D10; I imagine a lot of people who bought the mostly high-end cards which supported SM3 well will have upgraded to DX10+ or will do so soon... David [/quote] Yeah, I'm thinking about that, but I've had requests for DX9 hardware support; I guess those people will just have to be happy with less fanciness...
  7. DX11 - DX9.3 feature level with SM3?

    Oops, I see, thanks. That kind of makes the feature levels useless for me, though.
  8. I am trying to get some shaders to work in DX11 with the DX9.3 feature level, which I assumed was DX9 with Shader Model 3. However, I get the following errors compiling the shader:

    error X5608: Compiled shader code uses too many arithmetic instruction slots (712). Max. allowed by the target (ps_2_x) is 512.
    (1,1): error X5609: Compiled shader code uses too many instruction slots (816). Max. allowed by the target (ps_2_x) is 512.

    So I guess it's compiling to an SM2 pixel shader instead of SM3? I use .fx files with technique11 techniques, ps_4_0_level_9_3 as the pixel shader profile and fx_5_0 as the effect compile profile, and I double-checked the device's feature level (see the sketch below). Has anyone got an idea of what's going wrong here? Edit: It was also complaining about a texture having mipmaps while not being a power of 2 in size, which is supported for the 9.3 feature level according to MSDN... Is there any way that DX might be telling me the feature level is 9.3 while it is actually 9.2 or 9.1?
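    For reference, this is roughly how I'm checking the feature level (a sketch; the device/context variable names are mine):

    [code]
    // Request 9.3 explicitly so the runtime can't silently fall back to 9.2/9.1.
    D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_9_3 };
    D3D_FEATURE_LEVEL obtained;
    HRESULT hr = D3D11CreateDevice( NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                                    requested, 1, D3D11_SDK_VERSION,
                                    &g_pd3dDevice, &obtained, &g_pContext );

    // Double-check what the created device actually reports.
    if( SUCCEEDED(hr) && g_pd3dDevice->GetFeatureLevel() == D3D_FEATURE_LEVEL_9_3 )
    {
        // Feature level 9.3 confirmed.
    }
    [/code]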
  9. Is rendering without a VB valid in DX11?

    Never mind, I looked in the wrong places; I found the answer on the MSDN page for the ID3D11DeviceContext::Draw method: [quote]The vertex data for a draw call normally comes from a vertex buffer that is bound to the pipeline. However, you could also provide the vertex data from a shader that has vertex data marked with the SV_VertexId system-value semantic.[/quote]
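    For completeness, the setup boils down to something like this (a sketch with my own variable names):

    [code]
    // No vertex buffer and no input layout bound; the vertex shader generates
    // everything it needs from SV_VertexID.
    g_pContext->IASetInputLayout( NULL );
    ID3D11Buffer* nullVB = NULL;
    UINT zero = 0;
    g_pContext->IASetVertexBuffers( 0, 1, &nullVB, &zero, &zero );
    g_pContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST );
    g_pContext->Draw( 3, 0 ); // three vertices, e.g. one full-screen triangle
    [/code]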
  10. shadow mapping

    First of all, your depth seems to be inverted, so you are probably not outputting the correct values. (Black should be near the camera, but your object is all white.) Second, 100000.0f is a [b]very [/b]large value for a far plane, so there's a good chance anything near the camera will (appear to) be the same color, especially in the capture, which is only 8-bit. You could try a lower far plane value like 1000.0 or 100.0 (or an extremely low value like 10.0) and see if that works any better.
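    To put a number on it (assuming a standard projective depth buffer with a near plane of 1.0): the stored depth is roughly (f / (f - n)) * (1 - n / z), so with f = 100000 everything from z = 100 outward already lands above 0.99, and an 8-bit screenshot squeezes that entire range into the last two or three grey levels, which is why the object looks uniformly white.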
  11. Using DX11, I render with a NULL vertex buffer and NULL input layout, using SV_VertexID inside the vertex shader as the only input; is that OK? I tested this on an NVIDIA card with the latest drivers and it seems to work fine, and I don't get any errors/warnings/info from the DX debug layer either. However, all the MSDN topics and other pages I can find seem to assume you pass a valid VB (and note that it [b]must[/b] be created with D3D11_BIND_VERTEX_BUFFER etc.), so I still have my doubts... Can anyone confirm this is indeed OK to do?
  12. Is it possible to multiply two (lighting) functions, converted to SH, by using only their SH coefficients? All the methods I have found so far (aside from multiplying the sampled results of the two SH expansions and then converting the product back to SH coefficients) seem to need one of the original functions, but all I will have is two lists of coefficients. What other ways are there to do this? Sorry if this question is obvious, but calculus is not my best area, so I don't understand the formulas in a lot of explanations that well. EDIT: I don't know if it's standard procedure, but the result values of both functions (evaluated from SH back to a single value) are clamped to be at least 0.0 before they are multiplied together (one function is a mask for the other function). [Edited by - Tree Penguin on September 23, 2010 11:00:02 AM]
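    To make the question concrete: given the coefficients a_i and b_j of the two functions, what I'm after are the coefficients c_k of their product. If I understand the math correctly, those should be expressible from the coefficients alone as c_k = sum over i,j of a_i * b_j * T_ijk, where T_ijk is the integral of y_i * y_j * y_k over the sphere. The T_ijk (related to Clebsch-Gordan/Gaunt coefficients) depend only on the SH basis, are mostly zero, and can be tabulated once, so the product would never need the original functions.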
  13. I'd like to render part of a scene using a normal depth target (less-equal tested, and written to) and a second depth target (greater-than tested), so that I have a max depth as well as a min depth. Is this in any way possible in the newer DX versions (10/11), without simply sampling the second depth texture and testing it manually? PS: the second depth buffer does not need to be written to.
  14. Internally, the shader units do their calculations in floating point (128 bits per pixel for RGBA, i.e. a 32-bit float per channel), which is why you usually see floats in shaders. When you return the float4 from the main function, it is converted to match your render target's format (usually 32bpp, 8 bits per channel): 0.0f will become black (0), and 1.0f will become white (255).
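    A minimal sketch of the per-channel conversion to an 8-bit UNORM target (the exact rounding rule is up to the implementation, but it behaves roughly like this):

    [code]
    // Convert a shader output value to an 8-bit UNORM channel.
    unsigned char ToUnorm8( float f )
    {
        if( f < 0.0f ) f = 0.0f;   // values are clamped (saturated) to [0, 1]
        if( f > 1.0f ) f = 1.0f;
        return (unsigned char)( f * 255.0f + 0.5f ); // scale and round
    }
    // ToUnorm8(0.0f) == 0 (black), ToUnorm8(0.5f) == 128, ToUnorm8(1.0f) == 255 (white).
    [/code]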
  15. GLSL : find global min. and max in texture

    Quote: Original post by taby
    I suppose you could scan through the entire texture using a sampler2D from within the shader, but that would be a ridiculously huge waste of GPU time.

    For post-processing (doing HDR adjustments etc.) that's a good way to do it. The shader would be heavy, but you only run it for a single pixel, so it's not that bad.