
AliasBinman

Member Since 07 Oct 2007
Offline Last Active May 20 2016 05:25 PM

#5291903 Vertex color interpolation weirdness (or is my hardware trolling me ?)

Posted by AliasBinman on 16 May 2016 - 12:01 PM

The banding you are seeing is expected with the FP10 format. The format has a 6-bit mantissa and a 4-bit exponent, so from 0.5 to 1.0 you only have 64 unique codes (2^6), which matches the step size you are measuring: 1 / 0.0156 ≈ 64.

 

The banding is quantisation error introduced when storing to the FP10 blue channel.

 

A solution is to add noise or dither at export time to mask the banding artifacts.
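The same idea also works at shading time, just before the value is written to the low-precision target. A minimal sketch of that kind of dither (the hash function and the quantisation step are illustrative, not taken from this thread):

// Illustrative dither before quantisation to a low-precision format.
float Hash(float2 pixelCoord)
{
    // cheap screen-space noise in [0, 1)
    return frac(sin(dot(pixelCoord, float2(12.9898, 78.233))) * 43758.5453);
}

float3 DitherBeforeQuantise(float3 color, float2 pixelCoord, float quantStep)
{
    // add +/- half a quantisation step so the bands break up into noise
    float noise = Hash(pixelCoord) - 0.5;
    return color + noise * quantStep;
}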

Here is a great presentation on this.
http://loopit.dk/banding_in_games.pdf




#5283057 Math behind anisotropic filtering?

Posted by AliasBinman on 23 March 2016 - 09:56 PM

The best way to think of it is to picture what a pixel looks like when projected into texture space. If the surface is aligned with the camera near plane then the pixel maps to a square in texture space; it could also be freely rotated and look like a diamond. For point sampling, the texel closest to this square sample is used. For bilinear, it's a weighted average of the 4 nearest texels. For mip mapping, a mip level is selected such that the square covers about 1 texel of area. Think about this square projected onto each mip level: as you go down the mip chain, each texel effectively doubles in size along each axis.

 

For anisotropic filtering, the pixel is elongated and forms a thin rectangle. There is no single mip level which fully represents the whole rectangle. Instead, the rectangle can be subdivided into smaller rectangles that are more square-like. Each of these subdivisions has its own sample centre and can be bilinear filtered, and the results averaged, which better represents all the texels that touch the rectangle.
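A rough sketch of that subdivision idea, written as a manual approximation in a pixel shader purely for illustration (real anisotropic filtering happens in fixed-function hardware; the tap count and axis selection here are simplified assumptions):

// Manual, simplified anisotropic filtering: several bilinear taps spread along the
// longer screen-space footprint axis, each with a squarer sub-footprint, then averaged.
Texture2D    gTexture  : register(t0);
SamplerState gBilinear : register(s0);

float4 SampleAnisoApprox(float2 uv, int numTaps)
{
    float2 du = ddx(uv);
    float2 dv = ddy(uv);

    // pick the longer footprint axis as the direction to march along
    bool   duMajor   = dot(du, du) > dot(dv, dv);
    float2 majorAxis = duMajor ? du : dv;
    float2 minorAxis = duMajor ? dv : du;

    float4 sum = 0;
    for (int i = 0; i < numTaps; ++i)
    {
        // centre the taps on the original uv, spaced across the footprint
        float t = (i + 0.5) / numTaps - 0.5;

        // each tap gets a smaller, squarer footprint via explicit gradients
        sum += gTexture.SampleGrad(gBilinear, uv + majorAxis * t, majorAxis / numTaps, minorAxis);
    }
    return sum / numTaps;
}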

 

Hope this helps. I know pictures would explain it better; I'm sure there are some good visual explanations out there.




#5282454 Per Triangle Culling (GDC Frostbite)

Posted by AliasBinman on 21 March 2016 - 04:10 PM

The point is that the ALU processing capabilities far exceed those of the fixed-function triangle setup and rasterizer. Using compute you can prune the set of triangles to get rid of the ones that would never lead to any shaded pixels anyway. It's purely there to get a bit more performance.
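As a sketch of the kind of test involved (not the actual Frostbite implementation; buffer layout, winding convention and thread-group size are assumptions), a compute shader can read three clip-space positions per triangle, run a backface test, and only append the triangles that survive for a later indirect draw:

// Sketch only: cull backfacing triangles in compute and append the survivors.
StructuredBuffer<float4>      ClipPositions    : register(t0); // pre-transformed clip-space positions
StructuredBuffer<uint3>       Triangles        : register(t1);
AppendStructuredBuffer<uint3> VisibleTriangles : register(u0);

cbuffer CullConstants : register(b0)
{
    uint TriangleCount;
};

[numthreads(64, 1, 1)]
void CullTriangles(uint3 id : SV_DispatchThreadID)
{
    if (id.x >= TriangleCount)
        return;

    uint3  tri = Triangles[id.x];
    float4 a = ClipPositions[tri.x];
    float4 b = ClipPositions[tri.y];
    float4 c = ClipPositions[tri.z];

    // homogeneous backface test; the sign convention depends on winding order
    float det = determinant(float3x3(a.xyw, b.xyw, c.xyw));
    if (det > 0.0)
        VisibleTriangles.Append(tri);
}

Small and zero-area triangles can be rejected with extra tests in the same pass.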




#5259209 How to draw 3d billboard fixed independent distance size?

Posted by AliasBinman on 26 October 2015 - 05:01 PM

It's probably worth calculating the distance using a projection onto the view-space forward vector.

So

 

float dist = dot(billboardPos - cameraPosition, cameraFwd);
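If the goal is a billboard whose on-screen size stays constant, one way to use that distance (a sketch; the corner expansion and the desiredScreenFraction tuning value are illustrative, not from this thread) is to scale the quad linearly with it, which cancels the perspective divide:

// Sketch: expand one corner of a camera-facing quad so its projected size stays constant.
float3 BillboardCorner(float3 billboardPos, float2 cornerOffset,
                       float3 cameraPosition, float3 cameraFwd,
                       float3 cameraRight, float3 cameraUp,
                       float desiredScreenFraction)
{
    // distance along the view direction, as in the line above
    float dist = dot(billboardPos - cameraPosition, cameraFwd);

    // scaling world-space size linearly with that distance cancels the perspective divide
    float scale = dist * desiredScreenFraction;

    return billboardPos
         + cameraRight * (cornerOffset.x * scale)
         + cameraUp    * (cornerOffset.y * scale);
}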




#5250628 Non-linear zoom for 2D

Posted by AliasBinman on 04 September 2015 - 05:12 PM

The following code will do what you want assuming you have a constant update tick.

 

float Size = 16.0f;

 

then in the update

const float RateDecay = 0.99f; 

Size = Size*RateDecay;

 

If you don't have a fixed timestep you can get the same frame-rate-independent behaviour with an exponential:

Size = Size * exp(-kConstant * dt);

where exp(-kConstant * fixedDt) plays the same role as RateDecay above.




#5249003 HDR tonemapping without render-to-texture?

Posted by AliasBinman on 26 August 2015 - 09:40 AM

Also, post-processing effects such as DOF, motion blur and bloom look far better if they operate on the HDR data prior to tonemapping.




#5244996 Tinted bloom?

Posted by AliasBinman on 07 August 2015 - 12:07 PM

Looks like the bloom buffer is tapped twice with a centred radial UV shift, with one tap tinted purple and the other orange, and the two added together.
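A rough sketch of that idea (the UV scales, tint colours and texture names are all guesses for illustration, not taken from the game in question):

// Sketch: two taps of the bloom buffer with slightly different centred UV scales,
// each tinted, then summed.
Texture2D    gBloom  : register(t0);
SamplerState gLinear : register(s0);

float3 TintedBloom(float2 uv)
{
    float2 centred = uv - 0.5;

    // first tap: pulled slightly towards the centre, tinted purple
    float3 tapA = gBloom.Sample(gLinear, centred * 0.98 + 0.5).rgb * float3(0.8, 0.3, 1.0);

    // second tap: pushed slightly out from the centre, tinted orange
    float3 tapB = gBloom.Sample(gLinear, centred * 1.02 + 0.5).rgb * float3(1.0, 0.5, 0.2);

    return tapA + tapB;
}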




#5236764 Imposter Transitioning to Mesh

Posted by AliasBinman on 25 June 2015 - 11:10 AM

I created a simple demo of this using WebGL. The blog post detailing what is happening isn't finished, but in the meantime you can look at the demo and see what I am doing in the ModelWarpVS shader.

 

https://dl.dropboxusercontent.com/u/20691/AAImpostor.html




#5233835 Roughness in a Reflection

Posted by AliasBinman on 09 June 2015 - 11:05 AM

This is the best blog that describes this irradiance convolution.

https://seblagarde.wordpress.com/2012/06/10/amd-cubemapgen-for-physically-based-rendering/

 

 

For a gentler introduction follow some of the references and in particular the AMD presentations on this.




#5233591 Difference between camera velocity & object velocity

Posted by AliasBinman on 08 June 2015 - 12:56 PM

The main difference is that camera motion blur can be done via the depth buffer and some static per-frame constants, whereas object motion blur needs a per-pixel velocity, or at least some per-pixel data from which to calculate it. If you are doing motion blur on skinned objects then this requires extra work: more bone data in the VS, or at least caching off the skinned data for at least one frame.
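For the camera-only case, the usual depth-buffer approach looks roughly like this (a sketch; the matrix names and the D3D-style UV/clip conventions are assumptions): reconstruct the pixel's world position from depth, reproject it with last frame's view-projection, and the screen-space delta is the blur vector.

// Sketch: camera-only motion blur velocity from the depth buffer plus two per-frame matrices.
Texture2D    gDepth : register(t0);
SamplerState gPoint : register(s0);

cbuffer ReprojectConstants : register(b0)
{
    float4x4 InvViewProj;  // current frame: clip -> world
    float4x4 PrevViewProj; // previous frame: world -> clip
};

float2 CameraVelocity(float2 uv)
{
    float depth = gDepth.SampleLevel(gPoint, uv, 0).r;

    // current-frame clip-space position of this pixel
    float4 clipPos = float4(uv.x * 2.0 - 1.0, 1.0 - uv.y * 2.0, depth, 1.0);

    // back to world space
    float4 worldPos = mul(clipPos, InvViewProj);
    worldPos /= worldPos.w;

    // forward into last frame's clip space
    float4 prevClip = mul(worldPos, PrevViewProj);
    prevClip /= prevClip.w;

    // the screen-space difference is the per-pixel blur vector
    float2 prevUV = float2(prevClip.x * 0.5 + 0.5, 0.5 - prevClip.y * 0.5);
    return uv - prevUV;
}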

 

Ultimately it's down to efficiency and ease of integration.




#5231538 How to get started with shadertoy ?

Posted by AliasBinman on 28 May 2015 - 01:50 PM

Here are a few links. 

The first set are from iq, who is the co-creator of Shadertoy.

 

http://iquilezles.org/www/index.htm (distance functions and others)

http://iquilezles.org/www/material/nvscene2008/nvscene2008.htm

 

 

This is also a great resource (as well as its references at the bottom).

http://graphics.cs.williams.edu/courses/cs371/f14/reading/implicit.pdf

 

 

The great advantage of Shadertoy is the near-instant feedback loop from trying something to seeing it on screen.




#5231274 Imposter Transitioning to Mesh

Posted by AliasBinman on 27 May 2015 - 10:53 AM

I wrote an impostor-based system for a sports game where we would draw up to 20,000 instances of the crowd. I would then swap in the 3D model when you got close enough that the impostor texture would start to magnify. I managed to do this with no noticeable pop.

 

I have a blog post half written on how I did this and will update this thread when I finish it. The secret sauce, however, is to differentiate between internal and external perspective with regard to the impostor meshes. By this I mean we rendered the models into the impostor texture with an ortho projection, and the quads were then rendered into the world with a perspective projection. Then, when drawing the 3D mesh, I would do this same ortho-in-local-space, perspective-in-world-space operation. This internal ortho projection was then lerped to a full perspective projection as the model got closer to the camera. The downside is you get a slightly weird rotation of the 3D model as it approaches the camera, but it's something you have to look for to notice.
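Not the actual ModelWarpVS, but a sketch of the general shape of that warp (captureDir, blend and the matrix names are assumed inputs): collapse the local-space depth along the impostor capture axis, lerp back to the true position as the camera approaches, and then run the normal world/view/projection transform.

// Sketch of the ortho-in-local-space, perspective-in-world-space idea described above.
cbuffer WarpConstants : register(b0)
{
    float4x4 World;
    float4x4 ViewProj;
    float3   captureDir;  // axis the impostor was rendered along, in local space (normalised)
    float    blend;       // 0 = fully flattened (matches the impostor), 1 = true 3D mesh
};

float4 WarpVertex(float3 localPos)
{
    // "ortho in local space": remove the depth component along the capture axis
    float3 flattened = localPos - captureDir * dot(localPos, captureDir);

    // lerp back to the real shape as the camera gets close
    float3 warped = lerp(flattened, localPos, blend);

    // then an ordinary perspective transform in world space
    float4 worldPos = mul(float4(warped, 1.0), World);
    return mul(worldPos, ViewProj);
}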

 

Video of the crowd in action 




#5224925 GPU Ternary Operator

Posted by AliasBinman on 22 April 2015 - 03:56 PM

That optimization could potentially make things worse: it involves an indirect data lookup, which can be slower than simply a couple of predicated moves.




#5222329 What to do when forward vector equals up vector

Posted by AliasBinman on 09 April 2015 - 05:30 PM

This is a good blog which may help you.

 

http://blog.selfshadow.com/2011/10/17/perp-vectors/
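The usual quick fallback is simply to pick a different reference up when the forward vector gets too close to it; the linked post covers more robust constructions. A minimal sketch (the 0.999 threshold and the Y-up convention are arbitrary choices):

// Sketch: build a right/up pair from a forward vector, falling back to another
// reference axis when forward is nearly parallel to the usual up.
void BuildBasis(float3 forward, out float3 right, out float3 up)
{
    float3 refUp = (abs(forward.y) < 0.999) ? float3(0, 1, 0) : float3(1, 0, 0);

    right = normalize(cross(refUp, forward));
    up    = cross(forward, right);
}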




#5131093 Old school 3D engines

Posted by AliasBinman on 13 February 2014 - 12:56 PM

Have you read the Michael Abrash series of books? It's more than just graphics programming, but still a great read.

 

http://www.drdobbs.com/parallel/graphics-programming-black-book/184404919





