
MJP

Member Since 29 Mar 2007

#5072099 Using D3D9 Functions and HLSL

Posted by MJP on 22 June 2013 - 05:32 PM

Yes, you can absolutely do that. However, the D3DX9 mesh loading functions require a D3D9 device, so you will need to create one in addition to your D3D11 device.
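
A common trick is to create a NULLREF D3D9 device that exists only to satisfy the D3DX9 loading functions. Here's a minimal sketch along those lines, assuming you copy the mesh data into D3D11 buffers afterwards; the file name is just a placeholder:

```cpp
#include <d3d9.h>
#include <d3dx9.h>

// Create a NULLREF D3D9 device purely so that D3DX9 mesh loading has a
// device to work with, alongside your real D3D11 device.
IDirect3D9* d3d9 = Direct3DCreate9(D3D_SDK_VERSION);

D3DPRESENT_PARAMETERS pp = {};
pp.Windowed = TRUE;
pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
pp.BackBufferFormat = D3DFMT_UNKNOWN;
pp.hDeviceWindow = GetDesktopWindow();

IDirect3DDevice9* d3d9Device = nullptr;
d3d9->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_NULLREF, GetDesktopWindow(),
                   D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &d3d9Device);

// Load the mesh into system memory, then copy the vertex/index data into
// your own D3D11 buffers.
ID3DXMesh* mesh = nullptr;
D3DXLoadMeshFromX(TEXT("model.x"), D3DXMESH_SYSTEMMEM, d3d9Device,
                  nullptr, nullptr, nullptr, nullptr, &mesh);
```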




#5071611 Better to have separate shaders for each graphical option, or pass constants...

Posted by MJP on 20 June 2013 - 04:39 PM

Like anything else, the correct choice depends on a few things. Generating separate shaders will *always* result in more efficient assembly than branching on a value from a constant buffer. Statically disabling a feature allows the compiler to optimize away any calculations and texture fetches that feature would need, which results in a more efficient shader. Branching, on the other hand, allows the GPU to skip executing the code inside the branch, but there is still a performance penalty from the branch itself. The compiler also can't optimize away the code inside the branch, which can increase register usage.

However, there are downsides to using separate shaders. For instance, you have to compile and load more shaders, and the number of shaders can explode once you add more than a few features that can each be turned on or off. You also have to switch shaders more often, which can result in higher CPU overhead and can also hurt GPU efficiency by causing pipeline flushes.

 

For your particular case, shadows are probably a good fit for a separate shader: shadows tend to be heavy in terms of GPU performance due to multiple texture fetches, so the performance gain is probably worth it.
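
To make the "separate shaders" route concrete, here's a rough C++ sketch: every combination of a few on/off features gets its own compiled permutation, keyed by a bitmask. The feature names and the "PSMain" entry point are hypothetical, and with N boolean features you end up with 2^N shaders, which is how the count explodes.

```cpp
#include <d3dcompiler.h>
#include <map>
#include <vector>

// Compile one pixel shader permutation per combination of on/off features.
std::map<UINT, ID3DBlob*> CompilePermutations(const void* srcData, SIZE_T srcSize)
{
    const char* featureNames[] = { "ENABLE_SHADOWS", "ENABLE_NORMAL_MAPPING" };
    const UINT numFeatures = 2;

    std::map<UINT, ID3DBlob*> permutations;
    for (UINT bits = 0; bits < (1u << numFeatures); ++bits)
    {
        std::vector<D3D_SHADER_MACRO> defines;
        for (UINT i = 0; i < numFeatures; ++i)
            defines.push_back({ featureNames[i], (bits & (1u << i)) ? "1" : "0" });
        defines.push_back({ nullptr, nullptr });    // terminator

        ID3DBlob* bytecode = nullptr;
        ID3DBlob* errors = nullptr;
        D3DCompile(srcData, srcSize, "shader.hlsl", defines.data(), nullptr,
                   "PSMain", "ps_5_0", 0, 0, &bytecode, &errors);
        permutations[bits] = bytecode;              // check errors in real code
    }
    return permutations;
}
```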




#5071338 GPU particles

Posted by MJP on 19 June 2013 - 10:31 PM

Yeah, the point->quad expansion has special-case handling in GPUs because it's so common. If you really want to avoid the GS, you can also use instancing to accomplish the same thing.
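
For reference, a minimal D3D11 sketch of the instancing approach (the semantic names and buffers are hypothetical): a four-vertex quad in slot 0 provides the corners, a per-instance buffer in slot 1 provides one entry per particle, and the vertex shader combines the two.

```cpp
#include <d3d11.h>

// Slot 0: the four corners of a unit quad (per-vertex data).
// Slot 1: one entry per particle (per-instance data), e.g. center + size.
D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "CORNER",   0, DXGI_FORMAT_R32G32_FLOAT,       0, 0, D3D11_INPUT_PER_VERTEX_DATA,   0 },
    { "PARTICLE", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
};

// ... create the input layout and the two vertex buffers, bind them to slots 0 and 1 ...

void DrawParticles(ID3D11DeviceContext* context, UINT numParticles)
{
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
    context->DrawInstanced(4, numParticles, 0, 0);   // 4 quad corners per particle
}
```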




#5071337 Optimized deferred lighting....algorithm question

Posted by MJP on 19 June 2013 - 10:29 PM

Why don't you just use additive blending to combine the results of subsequent lighting passes?
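
In D3D11 terms that just means rendering each light's pass with an additive blend state, something along these lines (a sketch, assuming `device` and `context` are your D3D11 device and immediate context):

```cpp
#include <d3d11.h>

// Create and bind a ONE + ONE additive blend state so that each lighting
// pass adds its contribution on top of what's already in the render target.
void BindAdditiveBlending(ID3D11Device* device, ID3D11DeviceContext* context)
{
    D3D11_BLEND_DESC desc = {};
    desc.RenderTarget[0].BlendEnable           = TRUE;
    desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlend             = D3D11_BLEND_ONE;
    desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ONE;
    desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* additiveBlend = nullptr;
    device->CreateBlendState(&desc, &additiveBlend);    // cache/release in real code
    context->OMSetBlendState(additiveBlend, nullptr, 0xFFFFFFFF);
}
```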




#5070603 The Pixel Shader expects a Render Target View

Posted by MJP on 17 June 2013 - 04:24 PM

That warning means your pixel shader is trying to write out to SV_Target1, but you have a NULL render target view bound to the device context for slot 1. It won't actually cause a problem since the write to SV_Target1 will just be ignored, but you will be wasting a little bit of performance.
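
In other words, either remove the SV_Target1 output from the pixel shader, or bind a view in slot 1. A minimal sketch of the latter, with placeholder view names:

```cpp
#include <d3d11.h>

// Bind render target views for both slots so the shader's SV_Target0 and
// SV_Target1 outputs each have somewhere to go.
void BindBothTargets(ID3D11DeviceContext* context,
                     ID3D11RenderTargetView* target0,
                     ID3D11RenderTargetView* target1,
                     ID3D11DepthStencilView* depthStencil)
{
    ID3D11RenderTargetView* rtvs[2] = { target0, target1 };
    context->OMSetRenderTargets(2, rtvs, depthStencil);
}
```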




#5070483 D3D9 64-bit debug runtime

Posted by MJP on 17 June 2013 - 11:54 AM

There was a Windows 7 platform update that updated the D3D components and broke a few things (like PIX). To get the debug runtimes to work, you need to either install the Windows 8 SDK to get the new debug DLLs, or uninstall the platform update.




#5070041 low precision formats for vertex data?

Posted by MJP on 15 June 2013 - 03:09 PM

I can't say that I have ever observed such behavior on any hardware that I've worked on extensively, save for one console GPU that really liked fetching 32-byte vertex chunks. Any modern (DX10+) GPU doesn't even have dedicated vertex fetching hardware anymore, and will read the vertex data the same way it reads any other buffer.




#5070019 low precision formats for vertex data?

Posted by MJP on 15 June 2013 - 01:15 PM

Is there some reason that you care about 64-byte alignment?

 

The only thing you should need full 32-bit precision for is position; everything else you can compress. For texture coordinates, 16 bits should be sufficient, using either an integer or a half-precision float depending on whether you need values > 1 or < 0. Normals should be 16-bit signed integers, since they're always in the [-1, 1] range, and the same goes for tangents. Bone weights are typically stored as four 8-bit integers, since they're in the [0, 1] range.

EDIT: I forgot to mention that you can possibly compress normals and tangents even further by taking advantage of the fact that they are direction vectors, if you're willing to introduce some unpacking code into your vertex shader. Most of the techniques listed here are applicable, or, if your tangent frame is orthogonal, you can store the entire thing as a single quaternion.
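
As a rough illustration (not a prescription), a compressed vertex along those lines might look like this in D3D11, with the DXGI formats handling the decompression for you on fetch. The struct layout and semantic names are hypothetical; this comes to 40 bytes per vertex instead of roughly double that with full 32-bit floats everywhere.

```cpp
#include <d3d11.h>
#include <cstdint>

struct PackedVertex
{
    float    Position[3];     // DXGI_FORMAT_R32G32B32_FLOAT     - full precision
    uint16_t TexCoord[2];     // DXGI_FORMAT_R16G16_FLOAT        - half floats
    int16_t  Normal[4];       // DXGI_FORMAT_R16G16B16A16_SNORM  - [-1, 1]
    int16_t  Tangent[4];      // DXGI_FORMAT_R16G16B16A16_SNORM  - [-1, 1]
    uint8_t  BoneWeights[4];  // DXGI_FORMAT_R8G8B8A8_UNORM      - [0, 1]
    uint8_t  BoneIndices[4];  // DXGI_FORMAT_R8G8B8A8_UINT
};

D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION",     0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD",     0, DXGI_FORMAT_R16G16_FLOAT,       0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",       0, DXGI_FORMAT_R16G16B16A16_SNORM, 0, 16, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TANGENT",      0, DXGI_FORMAT_R16G16B16A16_SNORM, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "BLENDWEIGHT",  0, DXGI_FORMAT_R8G8B8A8_UNORM,     0, 32, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "BLENDINDICES", 0, DXGI_FORMAT_R8G8B8A8_UINT,      0, 36, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
```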




#5069611 changing code on its roots

Posted by MJP on 13 June 2013 - 07:27 PM

Dynamic linking can definitely be used to implement this, although it's a little wacky to use and will often generate sub-optimal code. Personally, I would just do this by pre-compiling several permutations of the shader, with the value of c defined in a preprocessor macro (similar to what Adam_42 suggests). Doing it this way allows the compiler to completely optimize away the if statement, along with any additional operations performed with the value of c. You can specify the macro definition through the "pDefines" parameter of D3DCompile, and then just compile the shaders in a for loop.
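
A sketch of that loop, using a hypothetical macro name C_VALUE and entry point VSMain purely for illustration:

```cpp
#include <d3dcompiler.h>
#include <string>
#include <vector>

// Compile one shader permutation for each possible value of c, with the value
// baked in as a preprocessor define so the compiler can fold the branch away.
std::vector<ID3DBlob*> CompileForEachC(const void* srcData, SIZE_T srcSize, int numValues)
{
    std::vector<ID3DBlob*> permutations;
    for (int c = 0; c < numValues; ++c)
    {
        std::string value = std::to_string(c);
        D3D_SHADER_MACRO defines[] = { { "C_VALUE", value.c_str() }, { nullptr, nullptr } };

        ID3DBlob* bytecode = nullptr;
        ID3DBlob* errors = nullptr;
        D3DCompile(srcData, srcSize, "shader.hlsl", defines, nullptr,
                   "VSMain", "vs_5_0", 0, 0, &bytecode, &errors);
        permutations.push_back(bytecode);           // check errors in real code
    }
    return permutations;
}
```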




#5068335 Tangent Binormal Normal

Posted by MJP on 08 June 2013 - 05:36 PM

A water plane aligned with the XZ plane isn't going to match the coordinate space of a tangent-space normal map unless you swap Y and Z. You will probably also need to negate one or both values, depending on how your coordinate system is set up.




#5067755 Constant Buffer By Name in HLSL

Posted by MJP on 05 June 2013 - 08:11 PM

You can definitely get a constant buffer location by name. Just obtain the ID3D11ShaderReflection interface for a shader, and then call GetResourceBindingDescByName to get the info for your constant buffer. You can also enumerate all of the resource bindings with GetResourceBindingDesc to see what resources are bound to which slots.
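
For reference, a minimal sketch of the lookup; the constant buffer name passed in (e.g. "PerFrameCB") is just an example:

```cpp
#include <d3dcompiler.h>
#include <d3d11shader.h>

// Look up which slot a constant buffer is bound to by name, using the
// shader's reflection interface.
UINT FindConstantBufferSlot(ID3DBlob* bytecode, const char* name)
{
    ID3D11ShaderReflection* reflection = nullptr;
    D3DReflect(bytecode->GetBufferPointer(), bytecode->GetBufferSize(),
               IID_ID3D11ShaderReflection, reinterpret_cast<void**>(&reflection));

    D3D11_SHADER_INPUT_BIND_DESC bindDesc = {};
    reflection->GetResourceBindingDescByName(name, &bindDesc);   // e.g. "PerFrameCB"

    UINT slot = bindDesc.BindPoint;   // the b# register to bind your buffer to
    reflection->Release();
    return slot;
}
```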




#5067422 What tone-mapping technique are you using?

Posted by MJP on 04 June 2013 - 01:01 PM

Well, if you're going with physically based stuff, a filmic tonemapper is a good fit: http://mynameismjp.wordpress.com/2010/04/30/a-closer-look-at-tone-mapping/

 

They're all, supposedly (if the name means anything), based off the tonemapping response of actual film, which obviously deals with physically based "real world" stuff anyway. There was a huge spiel with data, a PDF I read months ago, explaining exactly why film exposes that way and how it suits real-world scenarios, including real-world under/over-exposure clipping, light ranges, and so on. If anyone knows what I'm babbling about and has it bookmarked, that would be the most helpful thing I can think of. It covered real-world light intensities of everything from a moonless night to the middle of a bright day.

 

One of the specifics I can remember was the reason for the toes on either end spreading out into an S shape: to avoid sharp clipping in either over- or under-exposure, and to keep some saturation in both relatively dark shadows and bright highlights.

 

It sounds like you're talking about this presentation, which is from the SIGGRAPH 2010 course about color enhancement and rendering.
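
For anyone following along, here's a minimal sketch of one such operator, John Hable's "Uncharted 2" filmic curve (one of the curves covered in the linked material), using his published constants; the toe and shoulder are what produce the S shape and the soft roll-off into black and white described above:

```cpp
// Hable's filmic curve: the toe and shoulder give the S shape and avoid hard clipping.
float FilmicCurve(float x)
{
    const float A = 0.15f;  // shoulder strength
    const float B = 0.50f;  // linear strength
    const float C = 0.10f;  // linear angle
    const float D = 0.20f;  // toe strength
    const float E = 0.02f;  // toe numerator
    const float F = 0.30f;  // toe denominator
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

float FilmicTonemap(float linearValue, float exposure)
{
    const float whitePoint = 11.2f;   // linear value that maps to white
    return FilmicCurve(exposure * linearValue) / FilmicCurve(whitePoint);
}
```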




#5067209 Multiple textures with sampler

Posted by MJP on 03 June 2013 - 03:43 PM

No, you can't do that. The only workaround would be to pack multiple textures into a single texture.




#5066562 DirectX 11 Buffer Headache

Posted by MJP on 31 May 2013 - 06:29 PM

The old D3D9 way of dealing with resources was completely broken in terms of how GPUs actually work. When you put a resource in GPU memory it's no longer accessible to the CPU, since it lives in a completely different memory pool that's not accessible to user-space code. In order to support the old (broken) D3D9 semantics, drivers had to do crazy things behind the scenes, and the D3D runtime usually had to keep a separate copy of the resource contents in CPU memory. Starting with D3D10 they cleaned all of this up in order to better reflect the way GPUs work, and also to force programs onto the "fast path" by default by not giving them traps to fall into that would cause performance degradation or excessive memory allocation by the runtime or driver. Part of this is that you can no longer just grab GPU resources on the CPU; you have to explicitly specify up-front what behavior you want from a resource.

That said, why would you ever need to read back vertex buffer data? If you've provided the data, then you already have access to it and can keep it around for later. You wouldn't be wasting any memory compared to the old D3D9 behavior or to using a staging buffer, and it would be more efficient to boot.
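
A minimal sketch of that idea, assuming a hypothetical Vertex struct and loader: give D3D11 the usage up front, and keep the CPU-side copy around if you ever need to read the data again.

```cpp
#include <d3d11.h>
#include <vector>

struct Vertex { float Position[3]; float TexCoord[2]; };   // hypothetical layout

ID3D11Buffer* CreateVertexBuffer(ID3D11Device* device, const std::vector<Vertex>& cpuVertices)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = UINT(cpuVertices.size() * sizeof(Vertex));
    desc.Usage     = D3D11_USAGE_IMMUTABLE;        // GPU-only; the CPU never maps it
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

    D3D11_SUBRESOURCE_DATA initData = {};
    initData.pSysMem = cpuVertices.data();

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, &initData, &buffer);

    // If you need the vertex data again on the CPU, read it from cpuVertices
    // instead of reading the buffer back from the GPU.
    return buffer;
}
```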




#5066207 Sampling a texture in a domain shader [Cg]

Posted by MJP on 30 May 2013 - 12:49 PM

I don't really know Cg, but if tex2D is equivalent to what it was in older HLSL, then you can't use it in anything except a pixel shader. This is because it uses screen-space gradients to automatically select the mip level, and gradients can only be computed in a pixel shader. Try using tex2Dlod, or whatever the Cg equivalent is of a function that lets you manually specify the mip level.





