
MJP

Member Since 29 Mar 2007

#5067755 Constant Buffer By Name in HLSL

Posted by MJP on 05 June 2013 - 08:11 PM

You can definitely get a constant buffer location by name. Just obtain the ID3D11ShaderReflection interface for a shader, and then call GetResourceBindingDescByName to get the info for your constant buffer. You can also enumerate all of the resource bindings with GetResourceBindingDesc to see what resources are bound to which slots.
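For example, here's a minimal C++ sketch (assuming bytecode and bytecodeSize hold your compiled shader blob, and "PerFrame" stands in for your constant buffer's name):

#include <d3d11.h>
#include <d3dcompiler.h>

// Reflect the compiled shader blob.
ID3D11ShaderReflection* reflection = nullptr;
HRESULT hr = D3DReflect(bytecode, bytecodeSize, IID_ID3D11ShaderReflection,
                        reinterpret_cast<void**>(&reflection));

// Look up the constant buffer by the name it has in the HLSL source.
D3D11_SHADER_INPUT_BIND_DESC bindDesc = {};
if (SUCCEEDED(hr))
    hr = reflection->GetResourceBindingDescByName("PerFrame", &bindDesc);

// bindDesc.BindPoint is the register slot (the b# register for a cbuffer),
// which is what you pass to VSSetConstantBuffers and friends.
if (reflection)
    reflection->Release();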




#5067422 What tone-mapping technique are you using?

Posted by MJP on 04 June 2013 - 01:01 PM

[quote]
Well, if you're going with physically based stuff, a filmic tonemapper: http://mynameismjp.wordpress.com/2010/04/30/a-closer-look-at-tone-mapping/

They're all, supposedly (if the name means anything), based on the tonemapping of actual film, which obviously deals with physically based "real world" stuff anyway. There was this huge spiel with data, a PDF I read months ago, explaining exactly why film exposes that way and how it suits real-world scenarios, including real-world under/over-exposure clipping, light ranges, etc. If anyone knows what I'm babbling about and has it bookmarked, that's the most helpful thing I can think of. Real-world light intensity of everything from a moonless night to the middle of a bright day, etc.

One of the specifics I can remember was the reason for the toes at either end spreading out into an S shape: to avoid sharp clipping in either over- or under-exposure, and to ensure there's still some saturation in both relatively dark shadows and bright highlights.
[/quote]

It sounds like you're talking about this presentation, which is from the SIGGRAPH 2010 course about color enhancement and rendering.
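For a concrete illustration of that S-curve, here's John Hable's published filmic operator from the same era (a sketch of one well-known filmic curve, not necessarily the exact one from the presentation):

// Rational polynomial with a toe (shadows) and a shoulder (highlights)
// instead of a hard clip at either end.
float3 FilmicCurve(float3 x)
{
    const float A = 0.15f; // shoulder strength
    const float B = 0.50f; // linear strength
    const float C = 0.10f; // linear angle
    const float D = 0.20f; // toe strength
    const float E = 0.02f; // toe numerator
    const float F = 0.30f; // toe denominator
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

float3 Tonemap(float3 hdrColor, float exposure)
{
    const float W = 11.2f; // linear white point
    float3 curr = FilmicCurve(exposure * hdrColor);
    return curr / FilmicCurve(float3(W, W, W));
}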




#5067209 Multiple textures with sampler

Posted by MJP on 03 June 2013 - 03:43 PM

No, you can't do that. The only workaround would be to pack multiple textures into a single texture.
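If you do go the packing route (a texture atlas), the shader side is just a UV remap. A minimal sketch, where subTexScale and subTexOffset are hypothetical per-sub-texture constants describing where each texture sits in the atlas:

// Remap the mesh's [0, 1] UVs into the sub-texture's region of the atlas.
float2 atlasUV = uv * subTexScale + subTexOffset;
float4 color = tex2D(atlasSampler, atlasUV);

Note that you'll need padding between the packed textures (and care with mipmapping) to avoid colors bleeding across sub-texture borders.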




#5066562 DirectX 11 Buffer Headache

Posted by MJP on 31 May 2013 - 06:29 PM

The old D3D9 way of dealing with resources was completely broken in terms of how GPUs actually work. When you put a resource in GPU memory it's no longer accessible to the CPU, since it lives in a completely different memory pool that's not accessible to user-space code. In order to support the old (broken) D3D9 semantics, drivers had to do crazy things behind the scenes, and the D3D runtime usually had to keep a separate copy of the resource contents in CPU memory. Starting with D3D10 they cleaned all of this up in order to better reflect the way GPUs work, and to force programs onto the "fast path" by default by not giving them traps to fall into that would cause performance degradation or excessive memory allocation by the runtime or driver. Part of this is that you can no longer just grab GPU resources on the CPU; you have to explicitly specify up front what behavior you want from a resource.

That said, why would you ever need to read back vertex buffer data? If you've provided the data, then you surely already have access to that data and you can keep it around for later. You wouldn't be wasting any memory compared to the old D3D9 behavior or using a staging buffer, and it would be more efficient to boot.
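For illustration, here's a minimal D3D11 sketch of that up-front contract (vertexData and vertexDataSize stand in for the CPU-side copy you already own):

// An IMMUTABLE buffer lives in GPU-accessible memory and can never be
// mapped or read back by the CPU, so keep your own copy of the data if
// you'll need it again later.
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth = vertexDataSize;
desc.Usage = D3D11_USAGE_IMMUTABLE;
desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = vertexData;

ID3D11Buffer* vertexBuffer = nullptr;
HRESULT hr = device->CreateBuffer(&desc, &initData, &vertexBuffer);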




#5066207 Sampling a texture in a domain shader [Cg]

Posted by MJP on 30 May 2013 - 12:49 PM

I don't really know Cg, but if tex2D is equivalent to what it was in older HLSL then you can't use it in anything except a pixel shader. This is because it uses screen-space gradients to automatically select the mip level, and gradients can only be computed in a pixel shader. Try tex2Dlod, or whatever the Cg equivalent is of a function that lets you manually specify the mip level.
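For reference, a minimal sketch of the explicit-LOD form (heightSampler and uv are stand-ins):

// x and y hold the texture coordinates, and w selects the mip level
// explicitly (0 = top mip). No gradients are needed, so this also works
// outside of pixel shaders.
float4 value = tex2Dlod(heightSampler, float4(uv.x, uv.y, 0.0, 0.0));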




#5066018 Generating Cube Maps for IBL

Posted by MJP on 29 May 2013 - 08:09 PM

The general approach is to take the input cubemap as if it contained radiance at each texel, and pre-integrate that radiance with some approximation of your BRDF. Unfortunately it's not possible to pre-integrate anything except plain Phong without also parametrizing on the view direction, so the approximation is not that great if you want a 1:1 ratio of input to pre-integrated cubemaps. Most people will use CubeMapGen to convolve with a Phong-like lobe, using a lower specular power for each successive mip level. You can roll your own if you want; it's not terribly difficult. I actually made a compute shader integrator that we use in-house. Just make sure that you account for the non-uniform distribution of texels in a cubemap when you're integrating, otherwise the result will be incorrect.
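On that last point, the usual fix is to weight each texel by its differential solid angle. A minimal sketch of the weight (u and v are the texel center in [-1, 1] face coordinates; texelSize is 2.0 / faceResolution):

// For a point (u, v, 1) on the unit cube face, cos(theta) / r^2 works out
// to 1 / (u*u + v*v + 1)^(3/2), so texels near the face corners subtend
// less solid angle and get proportionally less weight.
float TexelSolidAngle(float u, float v, float texelSize)
{
    float d = u * u + v * v + 1.0f;
    return (texelSize * texelSize) / (d * sqrt(d));
}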




#5065166 Question about redundant texture binds.

Posted by MJP on 27 May 2013 - 12:43 AM

I'm sure at some point there are redundancy checks, but Nvidia and AMD still recommend that you do the checking yourself if you want the best performance.
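A minimal sketch of what that CPU-side check might look like for D3D11 pixel shader SRVs (the cache and its size are hypothetical):

// Shadow the currently-bound SRVs and skip the API call when nothing changed.
ID3D11ShaderResourceView* g_boundSRVs[16] = {};

void SetTexturePS(ID3D11DeviceContext* context, UINT slot,
                  ID3D11ShaderResourceView* srv)
{
    if (g_boundSRVs[slot] != srv)
    {
        g_boundSRVs[slot] = srv;
        context->PSSetShaderResources(slot, 1, &srv);
    }
}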




#5065164 Rendering Frames Per Second

Posted by MJP on 27 May 2013 - 12:40 AM

That D3DPRESENT enum controls which vsync mode is used when presenting. If you use DEFAULT or ONE, the GPU will wait until the next vertical refresh to present, which effectively limits you to the refresh rate of the monitor (surely 60 Hz in your case). The point of this is to prevent tearing.
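For reference, the field in question is PresentationInterval in D3DPRESENT_PARAMETERS; a minimal sketch of uncapping the frame rate:

// IMMEDIATE presents without waiting for the vertical refresh, so the frame
// rate is no longer capped at the monitor's refresh rate (at the cost of
// possible tearing).
D3DPRESENT_PARAMETERS pp = {};
pp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE; // uncapped
// pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;    // wait for vsync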




#5065163 What semantic(s) to use in order to rendre in multiple targets ?

Posted by MJP on 27 May 2013 - 12:37 AM

In the future it would be helpful if you posted the compilation errors, since they let people quickly figure out what's wrong with your code.

Your problem is that you're taking your "psOut" structure as an input parameter to your pixel shader, and SV_Target is an invalid input semantic for a pixel shader. Just remove it as a parameter and declare it locally in your function:

psOut pixel_main(psIn IN)
{
    psOut OUT;
    OUT.color = getColor(IN);
    OUT.RTposition = IN.RTposition;
    OUT.RTnormal = float4(IN.normal, 1.0);
    return OUT;
}
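For reference, the output struct would look something like this (member names taken from the code above); the SV_TargetN output semantics are what route each member to its render target:

struct psOut
{
    float4 color      : SV_Target0;  // written to render target 0
    float4 RTposition : SV_Target1;  // written to render target 1
    float4 RTnormal   : SV_Target2;  // written to render target 2
};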



#5064814 Locating possible memory leak in "Stalker" shaders

Posted by MJP on 25 May 2013 - 12:08 PM

I guess I should add that shaders themselves can't allocate anything, but it's possible that the engine allocates things based on what's in the shader. For instance, the engine might inspect the shader to see how many textures it uses, and allocate an array for storing pointers to those textures. However, it's impossible to know about these sorts of things without knowing more about the engine, or seeing the engine code.




#5064698 Locating possible memory leak in "Stalker" shaders

Posted by MJP on 24 May 2013 - 11:59 PM

Shaders can't allocate memory, so there's no way for them to create a memory leak.




#5064697 Calculate bitangent in vertex vs pixel shader

Posted by MJP on 24 May 2013 - 11:37 PM

On the particular platform that I work with it's generally a win to minimize interpolants, so I've been going with the latter approach. I'm honestly not sure what Maya or the other DCC packages do, I've never taken a close look.
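For concreteness, the pixel shader version is a single cross product. A minimal sketch, assuming the common convention of storing the handedness sign in the tangent's w component:

// Rebuild the bitangent per-pixel instead of passing it as an interpolant;
// tangent.w carries the +1/-1 handedness of the tangent frame.
float3 bitangent = cross(input.normal, input.tangent.xyz) * input.tangent.w;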




#5064281 DX11 - Depth Mapping - How?

Posted by MJP on 23 May 2013 - 03:51 PM

You want the second one: depthPosition.w / 1000.0




#5064275 creatInputLayout returns a NULL pointer

Posted by MJP on 23 May 2013 - 03:40 PM

Not all DXGI formats are supported for use in a vertex buffer. You need to look at the table entitled "Input assembler vertex buffer resources" from here to see which formats are supported. In your particular case the issue is that _SRGB formats aren't supported. You need to use DXGI_FORMAT_R8G8B8A8_UNORM instead, and then apply sRGB->Linear conversion manually in the shader.

Also, you should consider turning on the debug device by passing D3D11_CREATE_DEVICE_DEBUG when creating your device. When you do this, you'll get helpful error messages in your debugger output window whenever you do something wrong.
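A minimal sketch of device creation with the debug layer enabled (error handling omitted):

// Request the debug layer in debug builds; it validates API usage and
// prints detailed error messages to the debugger output window.
UINT flags = 0;
#ifdef _DEBUG
flags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                               flags, nullptr, 0, D3D11_SDK_VERSION,
                               &device, nullptr, &context);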




#5064269 DX11 - Depth Mapping - How?

Posted by MJP on 23 May 2013 - 03:16 PM

I'm not really sure what your exact problem is or what you're trying to accomplish here. However, I can tell you that dividing post-perspective z by 25.0 is not going to give you anything meaningful. Normally you would divide by w in order to get the same [0, 1] depth value that's stored in the depth buffer, but this value isn't typically useful for visualizing, since it's non-linear. Instead you usually want to take your view-space z value (which is the w component of mul(position, projectionMatrix), a.k.a. depthPosition.w) and divide it by your far-clip distance. This gives you a linear [0, 1] value.
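A minimal sketch of that last approach (farClip stands in for your projection's far-clip distance, e.g. the 1000.0 from the thread above):

// The w component of the projected position is the view-space depth, so
// dividing by the far-clip distance yields a linear [0, 1] value.
float4 depthPosition = mul(position, projectionMatrix);
float linearDepth = depthPosition.w / farClip;
return float4(linearDepth.xxx, 1.0);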





