

Member Since 06 Sep 2002
Offline Last Active Apr 21 2016 05:18 AM

#5156702 OpenGL 4.4 render to SNORM

Posted by on 29 May 2014 - 07:27 AM



Is it possible to render to a SNORM texture in OpenGL 4.4? Apparently SNORM formats are not among the required color formats for render targets in 4.2.


I want to render to an RG16_SNORM target to store normals in octahedron format. The linked paper contains code that expects and outputs data in the [-1, 1] range, and I assumed it would automatically work with SNORM textures.
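For reference, a minimal CPU sketch of the octahedral encode/decode in question, mapping a unit vector to and from two values in [-1, 1] (which is exactly the range an RG16_SNORM target would store); function names are my own:

```cpp
#include <cmath>

// Octahedral encode: unit vector -> two values in [-1, 1].
void octEncode(float nx, float ny, float nz, float& u, float& v) {
    float invL1 = 1.0f / (std::fabs(nx) + std::fabs(ny) + std::fabs(nz));
    float px = nx * invL1, py = ny * invL1;
    if (nz < 0.0f) { // fold the lower hemisphere over the diagonals
        float fx = (1.0f - std::fabs(py)) * (px >= 0.0f ? 1.0f : -1.0f);
        float fy = (1.0f - std::fabs(px)) * (py >= 0.0f ? 1.0f : -1.0f);
        px = fx; py = fy;
    }
    u = px; v = py;
}

// Octahedral decode: two values in [-1, 1] -> unit vector.
void octDecode(float u, float v, float& nx, float& ny, float& nz) {
    float x = u, y = v;
    float z = 1.0f - std::fabs(x) - std::fabs(y);
    if (z < 0.0f) { // unfold the lower hemisphere
        float fx = (1.0f - std::fabs(y)) * (x >= 0.0f ? 1.0f : -1.0f);
        float fy = (1.0f - std::fabs(x)) * (y >= 0.0f ? 1.0f : -1.0f);
        x = fx; y = fy;
    }
    float len = std::sqrt(x * x + y * y + z * z);
    nx = x / len; ny = y / len; nz = z / len;
}
```

Note that both outputs are genuinely signed, so a [0, 1]-clamping target silently destroys half the domain.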


The output seems to get clamped to [0, 1], though. I checked with a floating-point render target and got the expected results, so I don't think it is an issue with the code.


Should this work? Am I maybe doing something wrong when creating the texture?




D3D11 hardware supports SNORM render targets, so I guess I'm doing something wrong.

#5134689 Uniform buffer updates

Posted by on 26 February 2014 - 02:45 AM

What I currently do:


for (o : objects)
    bufferSubData(smallUniformBuffer, o.transformation);

What I think I should be doing:

offset = 0;
for (o : objects) {
    memory[offset] = o.transformation;
    offset += alignedBlockSize;
}

bufferData(hugeBuffer, memory);

offset = 0;
for (o : objects) {
    bindBufferRange(hugeBuffer, offset, blockSize);
    offset += alignedBlockSize;
}
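One detail worth noting with the batched approach: the offsets passed to glBindBufferRange must be multiples of GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, so each per-object block occupies a full aligned slot even if the data itself is small. A sketch of the offset math (helper names are mine; 256 is just a stand-in for the value queried at init):

```cpp
#include <cstddef>
#include <vector>

// Round offset up to the next multiple of alignment (power of two).
std::size_t alignUp(std::size_t offset, std::size_t alignment) {
    return (offset + alignment - 1) & ~(alignment - 1);
}

// Compute one aligned offset per object; each goes to glBindBufferRange.
std::vector<std::size_t> packOffsets(std::size_t blockSize,
                                     std::size_t count,
                                     std::size_t alignment) {
    std::vector<std::size_t> offsets;
    std::size_t offset = 0;
    for (std::size_t i = 0; i < count; ++i) {
        offset = alignUp(offset, alignment);
        offsets.push_back(offset);
        offset += blockSize;
    }
    return offsets;
}
```

With a 64-byte transform and a 256-byte alignment, objects land at 0, 256, 512, ... so the staging memory has to be laid out with the same stride.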

At first I was a bit frustrated because I am used to the Effects framework of the D3D SDK, but after reading the presentation about batched buffer updates it seems a D3D application can also benefit from doing it this way. So the architecture can be the same for both APIs.



"Were updates far enough apart to avoid conflicts (can be perf hit if call A writes to first half of block A, and call B writes to second half of block A)?"

Can you explain this a bit more? Are you saying that it is not good to write to the first half of the buffer and then to the second half, even though the ranges don't intersect?

#5117876 Integrate BRDF from UE4 presentation

Posted by on 18 December 2013 - 10:11 AM

I'm trying to implement the image based lighting described in the UE4 presentation.


They have a function called IntegrateBRDF() which is supposed to return two values (a, b) that are both somewhere between 0 and 1. In my tests, however, b seems to be in the [0, 0.15] range.

Has anyone played with this function as well? Do I have an implementation error, or is it supposed to be like that?
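For comparison, here is a CPU sketch of the integration following the listings in the course notes (Hammersley sampling, GGX importance sampling, Schlick-Smith geometry term with k = α/2 where α = roughness²); any deviation from your shader version is my own:

```cpp
#include <cmath>
#include <utility>

// Van der Corput radical inverse, base 2 -> Hammersley point i of N.
static std::pair<float, float> hammersley(unsigned i, unsigned N) {
    unsigned bits = i;
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return { float(i) / float(N), float(bits) * 2.3283064365386963e-10f };
}

// Returns (a, b): the scale and bias applied to F0 in the split-sum
// approximation. N is the +Z axis; V lies in the XZ plane.
std::pair<float, float> integrateBRDF(float roughness, float NoV) {
    const float PI = 3.14159265f;
    const unsigned numSamples = 1024;
    float Vx = std::sqrt(1.0f - NoV * NoV), Vz = NoV; // Vy = 0
    float alpha = roughness * roughness;
    float A = 0.0f, B = 0.0f;
    for (unsigned i = 0; i < numSamples; ++i) {
        std::pair<float, float> Xi = hammersley(i, numSamples);
        // GGX importance sample: half vector H around N = +Z.
        float phi = 2.0f * PI * Xi.first;
        float cosTheta = std::sqrt((1.0f - Xi.second) /
                                   (1.0f + (alpha * alpha - 1.0f) * Xi.second));
        float sinTheta = std::sqrt(1.0f - cosTheta * cosTheta);
        float Hx = sinTheta * std::cos(phi);
        float Hz = cosTheta;
        float VoH = Vx * Hx + Vz * Hz;
        float NoL = 2.0f * VoH * Hz - Vz; // L.z; only N.L is needed
        float NoH = Hz;
        if (NoL > 0.0f) {
            float k = alpha / 2.0f; // Schlick-Smith
            float G = (NoL / (NoL * (1.0f - k) + k)) *
                      (NoV / (NoV * (1.0f - k) + k));
            float G_Vis = G * VoH / (NoH * NoV);
            float Fc = std::pow(1.0f - VoH, 5.0f);
            A += (1.0f - Fc) * G_Vis;
            B += Fc * G_Vis;
        }
    }
    return { A / numSamples, B / numSamples };
}
```

In my (limited) runs of this sketch, b only gets large toward grazing angles, so a small b over most of the domain may well be correct behavior rather than a bug.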

#5019200 Compact World Space normal storage in g-buffer

Posted by on 08 January 2013 - 03:40 PM

Until now I used to store the normals in my g-buffer in view space, encoded with the spheremap technique that is described here (two 16-bit unorm channels). That works well.


I find myself using world space normals increasingly often for world space light probes and stuff like that, so I thought it might be beneficial to just store world space normals in my g-buffer.


The spheremap technique is no longer suitable because I get completely wrong normals close to the "z-pole", for instance when viewing the normals of a sphere.
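For concreteness, a CPU sketch of the spheremap (Lambert azimuthal equal-area) transform I mean, with my own naming; the division by p in the encode degenerates as n.z approaches -1, which is exactly the z-pole failure:

```cpp
#include <cmath>

// Spheremap encode: unit normal -> two channels in [0, 1].
// Degenerates as nz -> -1 (p -> 0), so it only suits view-space normals.
void sphereEncode(float nx, float ny, float nz, float& u, float& v) {
    float p = std::sqrt(nz * 8.0f + 8.0f);
    u = nx / p + 0.5f;
    v = ny / p + 0.5f;
}

// Spheremap decode: two channels in [0, 1] -> unit normal.
void sphereDecode(float u, float v, float& nx, float& ny, float& nz) {
    float fx = u * 4.0f - 2.0f;
    float fy = v * 4.0f - 2.0f;
    float f = fx * fx + fy * fy;
    float g = std::sqrt(1.0f - f / 4.0f);
    nx = fx * g;
    ny = fy * g;
    nz = 1.0f - f / 2.0f;
}
```

In view space the backward-facing pole is (almost) never hit, but with world-space normals every direction occurs, hence the artifacts.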

I then tried storing the normals as mentioned in the paper about the Unreal Engine 4 demo. They simply use an R10G10B10A2 format. It has the same space requirements, but even with a specular power of 128 I can clearly see artifacts in the highlights on a sphere, which is disappointing. (Of course the problem is much less apparent with more complex geometry and normal maps.)


I also tried an R16G16B16A16 format, which looks fine. But I am curious if you know of some more compact storage method that works well with world space normals?

#4960913 Dual Sphere-Unfolding/Omni-directional shadow maps

Posted by on 19 July 2012 - 07:07 AM

I'm curious what people are using for omni-directional shadow maps these days.
I think that dual paraboloid and cube map shadows are two typical choices. I have used the latter in the past. While looking for alternatives that might perform faster with comparable quality I came across Dual Sphere-Unfolding Shadow Maps, which are rendered in a single pass. Has anyone tried it? Is there a more detailed explanation of the method around? Any other methods that are worth looking into?

#4873156 Problem with ConstructGSWithSO (D3DX11 Effect)

Posted by on 16 October 2011 - 11:20 AM

I managed to get rid of the errors by specifying uint normal : Normal as the output for the second stream, which matches the stride of my normals because they are compressed. However, whatever I output in the geometry shader does not seem to end up in the output stream.

EDIT0: I confirmed that the second stream is not being written to by outputting the same data to both streams. Reading from those streams later, the data is not identical.

EDIT1: Had to specify [maxvertexcount(2)]. Now it works! :)

#4859108 Reading render target on CPU: performance issue

Posted by on 08 September 2011 - 10:52 AM

Thanks for the reply!

Well, I was convinced that the read-back was giving me a hard time and described my steps to remedy it. Turns out you were right in your suspicion that I was going after the wrong thing. Skipping the read-back and just rendering all the faces is not significantly faster, so that seems to be where most of the time is going.

#4856893 Do I have to Release everything before exiting?

Posted by on 02 September 2011 - 01:22 PM

Setting the swap chain to windowed mode before releasing it is the correct approach, as is explained here somewhere.
I'm not really sure, but I think it is not necessary to release the pointers before you quit the application. It has always seemed cleaner to me, though.

#4852942 Probably a dumb question (3D vector curves?)

Posted by on 23 August 2011 - 02:30 PM

Someone will come along with a more detailed answer, but in principle most 3D modelling software lets you model with curves and the like. The model is later converted to a triangle-based mesh because that is the only thing the hardware can render.
Also, curves are not vectors but rather get tessellated into line segments at a certain resolution. It's a very similar thing to the curve-to-triangle conversion, I believe.

#4846781 Loading Shaders for multiple models

Posted by on 09 August 2011 - 11:30 AM

As far as I can see, you are setting the shading parameters for all the models one after another without rendering anything, effectively overwriting the previous parameters. Then you start rendering, but of course now the parameters for the last model are set. Do the SetMatrix() stuff just before you render each model.

#4843491 Forward rendering - switching lights

Posted by on 02 August 2011 - 04:06 AM

Just wanted to point out that you can render as many lights as you want if you are willing to do more than one pass and blend the results together.

#4790039 D3D11: Vertex Shader - Geometry Shader linkage error

Posted by on 24 March 2011 - 11:52 AM

My bad. Turns out I was forgetting to explicitly set the geometry shader to 0 in a later effect, which caused the described behavior.

#4768695 HLSL compile in Visual C++

Posted by on 02 February 2011 - 02:27 PM

Just throwing in another idea:
You could compile the shaders and store the binary data when first encountering them. Now they are cached for the next time you need them, as long as nothing changes. If the binary file is older than the source file, you compile again. That way start-up times are usually lower (because you are just loading the pre-compiled data), but you can still modify shaders easily.
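The timestamp check can be as simple as this sketch (C++17 std::filesystem; file names and the helper name are just examples):

```cpp
#include <filesystem>

// Returns true when the cached binary is missing or older than the HLSL
// source, i.e. the shader must be recompiled.
bool needsRecompile(const std::filesystem::path& source,
                    const std::filesystem::path& binary) {
    namespace fs = std::filesystem;
    if (!fs::exists(binary)) return true;
    return fs::last_write_time(binary) < fs::last_write_time(source);
}
```

On a recompile you would write the blob returned by D3DCompile (or fxc) next to the source and load that blob directly on subsequent runs.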

#4760718 Stencil buffer vs. scissor rectangle

Posted by on 18 January 2011 - 09:00 AM

I am wondering whether there is a performance difference between using the stencil buffer and a scissor rectangle. I can imagine that setting the scissor rectangle is often cheaper than preparing the stencil buffer. But assuming that both have already been set up and are blocking the same region, will one or the other grant more performance on the following drawing operations?
Or put another way: If the stencil buffer is prepared, can you gain something by additionally using a scissor rectangle?
I'm not sure, but doesn't the scissor rectangle "work" earlier in the pipeline and therefore theoretically provide an added benefit?

#4493706 [SOLVED] Normal map mipmapping. Tangible benefits?

Posted by on 19 July 2009 - 12:39 AM

Original post by n00body
So I wanted to ask if anyone could chime in on the matter, mentioning tangible benefits for mipmapping normal maps? Why I would choose to jump through all those hoops, versus just linear filtering one level.

I wouldn't say it's that much of a hassle. If you re-normalize your normal after sampling (which seems to be a good idea anyway), you shouldn't have a problem with mipmapping.
Also, there are tools, for example by NVIDIA, that construct the normalized mipmaps for you. So it's just a matter of doing a small pre-processing step and you're ready to go.

[Edited by - B_old on July 19, 2009 9:39:49 AM]