
TheKreature

Member Since 24 Mar 2006

Topics I've Started

Detecting Boundaries in triangles with adjacency

21 January 2016 - 06:16 PM

I have been working with geometry shaders and triangles with adjacency for a number of years.

I've only just hit the case where I need to detect boundary edges (-1 in adjacency indices 1, 3, and 5, given a triangle-with-adjacency primitive with indices 0, 1, 2, 3, 4, 5)...

 

I am wondering what the input assembler does with invalid (value -1) indices, and what the best way to detect them would be.

I haven't found anything in the OpenGL or D3D11 docs regarding this, and I've tried a few tests, including:

 

- Comparing with an epsilon: distance(input[index0], input[index0+1]) < 1e-6f (i.e., for edge 0, indices 0 and 1)

- Comparing with an epsilon: distance(input[index2], input[index0+1]) < 1e-6f (i.e., for edge 0, indices 2 and 1)

- Comparing with an epsilon: distance(input[index0+1], input[index2+1]) < 1e-6f (i.e., for edge 0, indices 1 and 3)

- Testing for NaN in the incoming positions at indices 1, 3, 5, i.e. input[index0+1].worldPosition != input[index0+1].worldPosition, or isnan(input[index0+1].x)

 

None of these tests have flagged what should be boundary edges.

Obviously, I can scan my adjacency buffer on the system side for -1, and set invalid adjacency indices to [index-1], but I'm now kinda curious as to what the input assembler is doing.
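The system-side workaround mentioned above can be sketched like this (a hedged Python illustration, not the poster's actual code; the function name and the 6-indices-per-primitive layout are my assumptions, matching the GL_TRIANGLES_ADJACENCY / D3D triangle-with-adjacency convention where even slots are the triangle's vertices and odd slots are the adjacent vertices):

```python
# Minimal sketch (CPU side): sanitize a triangles-with-adjacency index
# buffer so boundary edges become detectable in the geometry shader.
# Assumed layout: groups of 6 indices per primitive; even slots (0, 2, 4)
# are the triangle's vertices, odd slots (1, 3, 5) are the adjacent
# vertices; -1 marks a missing neighbour (a boundary edge).

INVALID = -1

def sanitize_adjacency(indices):
    """Replace each -1 adjacency index with the preceding triangle
    vertex ([index - 1]), so the geometry shader can flag the edge as
    a boundary by testing input[k] == input[k - 1]."""
    out = list(indices)
    for base in range(0, len(out), 6):
        for slot in (1, 3, 5):
            if out[base + slot] == INVALID:
                out[base + slot] = out[base + slot - 1]
    return out

# One triangle (vertices 7, 8, 9) whose first and third edges have no
# neighbouring triangle:
print(sanitize_adjacency([7, -1, 8, 4, 9, -1]))  # -> [7, 7, 8, 4, 9, 9]
```

With this in place, the geometry shader's boundary test reduces to an exact index/position equality rather than relying on whatever the input assembler substitutes for -1.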

 

Does anyone know how to test for this?

 

 

Adjacency Ref:

adjacencies.jpg


Tool release for physically based rendering

09 November 2014 - 12:43 PM

I've recently released an image-based lighting baker for physically based rendering that generates pre-convolved specular cube maps (computed against a user-specified BRDF), using the separable method proposed by Epic at SIGGRAPH 2013.

 

The tool also bakes out the BRDF LUT, and a diffuse irradiance environment map. Cubemaps are saved as both MDR and HDR.
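For readers unfamiliar with the split-sum approach the tool is based on, here is a small scalar Python sketch of the environment-BRDF integration behind the baked BRDF LUT (this illustrates the technique from Epic's SIGGRAPH 2013 course notes, not the tool's actual shader code; helper names and the sample count are mine):

```python
import math

# Sketch of the split-sum environment-BRDF integration: for a given
# (NdotV, roughness) pair, integrate the GGX specular BRDF over
# importance-sampled half vectors to get a (scale, bias) pair for F0.
# At runtime: specular ~= F0 * scale + bias.

def hammersley(i, n):
    # i/n paired with the base-2 radical inverse of i (16-bit).
    bits = int('{:016b}'.format(i)[::-1], 2) / float(1 << 16)
    return i / float(n), bits

def importance_sample_ggx(u1, u2, roughness):
    # GGX-distributed half vector about the +Z normal.
    a = roughness * roughness
    phi = 2.0 * math.pi * u1
    cos_t = math.sqrt((1.0 - u2) / (1.0 + (a * a - 1.0) * u2))
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

def integrate_brdf(n_dot_v, roughness, num_samples=256):
    v = (math.sqrt(1.0 - n_dot_v * n_dot_v), 0.0, n_dot_v)
    k = (roughness * roughness) / 2.0  # Schlick-GGX k for IBL
    scale, bias = 0.0, 0.0
    for i in range(num_samples):
        u1, u2 = hammersley(i, num_samples)
        h = importance_sample_ggx(u1, u2, roughness)
        v_dot_h = sum(vc * hc for vc, hc in zip(v, h))
        # Reflect V about H to get the light direction L.
        l = tuple(2.0 * v_dot_h * hc - vc for vc, hc in zip(v, h))
        n_dot_l = l[2]
        if n_dot_l <= 0.0:
            continue
        n_dot_h = max(h[2], 0.0)
        v_dot_h = max(v_dot_h, 0.0)
        g = (n_dot_l / (n_dot_l * (1 - k) + k)) * \
            (n_dot_v / (n_dot_v * (1 - k) + k))
        g_vis = g * v_dot_h / (n_dot_h * n_dot_v + 1e-8)
        fc = (1.0 - v_dot_h) ** 5  # Schlick Fresnel weight
        scale += (1.0 - fc) * g_vis
        bias += fc * g_vis
    return scale / num_samples, bias / num_samples
```

Baking this for a grid of (NdotV, roughness) pairs gives the 2D LUT; the prefiltered specular cube map supplies the other half of the split sum.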

 

You can find the tool at:
https://github.com/derkreature/IBLBaker
 
There are also a number of walkthrough and example videos at:
http://www.derkreature.com/

I have also supplied two Maya example scenes for testing the cubemap outputs using Viewport 2.0 and cgfx.

 

Please contact me through my GitHub account if you find any bugs, have questions, or have any suggestions. The code is based on some of my older framework code, so you'll have to hold your nose around some of the more horrible bits.

 

Hopefully some of you find this useful.

If there is any interest, I'd consider writing an article on this.

 

I thought I'd throw in another quick demo of this tech applied to character rendering. (Still proof of concept really):


D3D11_FILL_WIREFRAME order of magnitude slower with 3xx.xx drivers?

09 December 2012 - 04:48 PM

Has anyone else noticed that D3D11_FILL_WIREFRAME is an order of magnitude slower on 3xx.xx drivers for GTX 480, 580, and 680 cards, no matter what the render state is?
The NVIDIA drivers have become so incredibly unstable in wireframe on consumer cards that I resorted to geometry shaders for reasonable performance several months ago. I'm merely curious.

FillMode=D3D11_FILL_WIREFRAME results in no shading with nvidia 3xx.xx drivers

10 July 2012 - 03:07 PM

I have noticed that attempting to render with fillmode=D3D11_FILL_WIREFRAME and AntialiasedLineEnable=false appears to result in the rasterizer not evaluating the pixel shader with NVIDIA 3xx.xx drivers.

- Raster ops succeed on 296.10 drivers and earlier.
- Raster ops succeed on 3xx.xx drivers with fillmode=D3D11_FILL_WIREFRAME and AntialiasedLineEnable=true.
- Raster ops appear to fail on 3xx.xx drivers with fillmode=D3D11_FILL_WIREFRAME and AntialiasedLineEnable=false.

I was wondering if anyone else has encountered the same problem?

Tested hardware:
GTX 480 (SLI enabled/disabled)
GTX 580 (SLI enabled/disabled)
GTX 680 (SLI enabled/disabled)

Thanks.

Windowed performance issues with OMSetRenderTargetsAndUnorderedAccessViews

03 September 2010 - 05:29 AM

Has anyone else noticed performance issues in windowed mode when binding UAVs with OMSetRenderTargetsAndUnorderedAccessViews?
I'm wondering if this is a buffer configuration, device configuration, driver, or vendor issue...
I've disabled all of the code that actually uses the UAVs. For a fairly complex scene (over a million tris with some tessellation thrown in), I'm getting 80 fps at 1920x1200 in fullscreen. In windowed mode the same scene configuration plummets to around 10 fps...
If I remove the OMSetRenderTargetsAndUnorderedAccessViews call, the frame rate goes back to normal. I'm going to start experimenting with Nsight (or whatever NVIDIA calls it) to get some proper timings, but I was just wondering if this is a known issue?

I'm also wondering if this is some windowed + UAV + SLI problem.
I'm running 2x GTX 480s... Anyway, experimentation awaits...
