
Member Since 18 May 2011
Offline Last Active Dec 23 2015 08:28 AM

Topics I've Started

GPU Ternary Operator

22 April 2015 - 12:24 PM

I understand why the ternary operator branches on the GPU if there is any in-line computation, like:

// Branching on the (value+1)!
float value = 3;
value = (value < 4) ? (value) : (value+1);

But if the ternary operator is literally selecting between two existing variables, does it necessarily have to branch?

// Why should this branch?
float value = 3;
float valuePlus = value + 1;
value = (value < 4) ? value : valuePlus;

// It should be equivalent to:
float value = 3;
float valuePlus = value + 1;

float values[2];
values[0] = valuePlus; // picked when (value < 4) is false, i.e. index 0
values[1] = value;     // picked when (value < 4) is true, i.e. index 1

int valueIndex = (int)(value < 4);
value = values[valueIndex];

// And no branching!
At least in my head it should just be an address offset between two values.

I know this is a nitpicky branching concern, but it's an interesting mini-optimization depending on the usage.

Does anyone know if GPU drivers detect and make this sort of optimization or is this a non-concern?

DX11 Multithreaded Resource Creation Question

08 July 2014 - 01:45 PM

This page says that coarse synchronization is used to prevent concurrent thread device access.

Can I design a multithreaded resource loader with only multithreading in mind, and expect that cards/drivers that don't support it will serialize the resource creation for me? I can't be sure, but I seem to recall stories of this causing crashes on drivers without support for multithreaded resource creation.


Can anyone confirm this? I don't have a system that I can try it out on!
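For what it's worth, the driver can be queried for this directly instead of guessing. A sketch (Windows/D3D11 only, assuming an already-created `ID3D11Device* device`, error handling trimmed):

```cpp
#include <d3d11.h>

// Ask the driver whether it natively supports concurrent resource
// creation; when it doesn't, the D3D11 runtime is documented to fall
// back to coarse synchronization around creates.
bool SupportsConcurrentCreates(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_THREADING threading = {};
    HRESULT hr = device->CheckFeatureSupport(
        D3D11_FEATURE_THREADING, &threading, sizeof(threading));
    return SUCCEEDED(hr) && threading.DriverConcurrentCreates == TRUE;
}
```

Whether every real-world driver honors that fallback cleanly is exactly the question, but at least the check lets you branch your loader at startup rather than finding out via a crash.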

Cascaded Shadow Maps Optimization Concept?

30 May 2014 - 03:29 PM

I just had a random thought the other day that sounds feasible, but I wonder if any of you guys can poke a hole in the concept.


The depth renders for cascaded shadow maps are orthographic, which means that there isn't any perspective foreshortening on the model renders. Every object with the same model/rotation/scale should have the same relative depth extents regardless of where they are on the shadow map.


I'm thinking it might be possible to render the model depth once to a render target texture, and then, for every instance of that model in the shadow cascade, just draw quads that read their depths from the depth texture rendered up front. You would also have to offset the depths read from the texture by the instance's distance from the orthographic camera, then write the result as the pixel shader's depth output so depth testing works like normal.


Depth rendering is pretty fast so this might be counterproductive because of the texture reading. You would probably have to atlas several model depth renders onto the same render target to minimize the number of binds/state-changes.


Can anyone see a flaw with the idea, or should I try out an implementation of it and tell you all how it goes?

Gamma Correction Issues

15 January 2014 - 04:59 PM

I'm trying to get gamma correction to work in my DX11 tile deferred renderer, but something in my pipeline must be doing something that I don't realize.


When I manually do gamma correction with pow(color, 2.2) and pow(finalColor, 1.0/2.2) it looks great! When I use the sRGB formats it appears way too bright.


My pipeline is as follows.


1. Create diffuse G-Buffer by reading from sRGB textures and writing to a sRGB render target. This is a pass-through.

2. Perform lighting in compute shader. Read from the sRGB diffuse G-buffer and compute lighting. Write the result to a DXGI_FORMAT_R16G16B16A16_FLOAT. I can't use an sRGB write format here because I have to write the result into a UAV.

3. Run post-processing. Read from the DXGI_FORMAT_R16G16B16A16_FLOAT and write down to the sRGB render target back buffer to be presented to the screen. I've disabled post-processing steps for the moment so this is just a pass-through.

4. Present.


What am I missing? >_>


Also, thanks!

Texture Compression Questions

02 December 2013 - 11:08 AM

There's a surprising lack of documentation on loading DXT-compressed data into DX11. Here are a few questions:


1. Is creating a compressed texture resource as simple as passing in the BC compressed data like normal texture data, but with the DXGI_FORMAT_BC--- flags as the format?


2. How do I create mip maps of BC compressed data? I assume I should create the mip maps from the uncompressed texture and then compress each level afterwards. Which leads me to...


3. How do I compute mip map offsets for compressed textures, so I can create/load them in properly? I want to compress a 2D texture array and generate mip maps before game start, but I'm not quite sure how to load them all up on resource creation.


4. Do BC formats need to be sampled in some special way, or does DX11 take care of it?