
# AliasBinman

Member Since 07 Oct 2007
Offline Last Active Aug 26 2016 03:29 PM

### #5294489 Avoiding huge far clip

Posted on 01 June 2016 - 10:40 AM

Use an infinite far plane.

This is a great paper on improving perspective precision.
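An infinite far plane is usually paired with reversed-Z, which is where most of the precision win comes from. A minimal sketch (my own illustration, not from the post), assuming a D3D-style [0, 1] clip depth and column-vector convention:

```python
import math

def infinite_reversed_z(fov_y, aspect, near):
    """Perspective projection with the far plane at infinity and reversed
    depth: ndc depth = near / view_z, so view_z = near maps to 1.0 and
    view_z -> infinity maps to 0.0."""
    f = 1.0 / math.tan(fov_y * 0.5)
    return [
        [f / aspect, 0.0, 0.0, 0.0 ],
        [0.0,        f,   0.0, 0.0 ],
        [0.0,        0.0, 0.0, near],  # clip.z = near
        [0.0,        0.0, 1.0, 0.0 ],  # clip.w = view_z
    ]

def ndc_depth(proj, view_z):
    clip_z = proj[2][2] * view_z + proj[2][3]
    clip_w = proj[3][2] * view_z + proj[3][3]
    return clip_z / clip_w
```

Note there is no far-plane term at all, and because float precision is densest near 0.0, reversing depth spends that precision on distant geometry instead of wasting it up close.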

### #5291903 Vertex color interpolation weirdness (or is my hardware trolling me?)

Posted on 16 May 2016 - 12:01 PM

The banding you are seeing is expected with the R11G11B10 small-float format. The 11-bit channels have a 5-bit exponent and a 6-bit mantissa, so from 0.5 to 1.0 there are only 64 unique codes (2^6), and 1 / 64 ≈ 0.0156 matches the step size you are seeing. (The 10-bit blue channel is coarser still, with only a 5-bit mantissa.)

The banding is quantisation introduced when storing to that small-float format.

A solution is to add noise or dither at export time to mask the banding artifacts.

Here is a great presentation on this.
http://loopit.dk/banding_in_games.pdf
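A minimal sketch of the export-time fix (illustrative Python, not from the presentation): quantise with and without ±1 LSB triangular dither. The dithered result is still correct on average, which is what turns a hard band edge into unobjectionable noise.

```python
import random

LEVELS = 64  # unique codes per [0.5, 1.0) octave for a 6-bit mantissa

def quantize(v):
    # Snap to one of LEVELS codes in [0, 1], like storing to the small format.
    return round(v * (LEVELS - 1)) / (LEVELS - 1)

def quantize_dithered(v, rng):
    # Triangular noise of +/- 1 LSB decorrelates the quantisation error,
    # so step edges are replaced by noise the eye averages away.
    lsb = 1.0 / (LEVELS - 1)
    noise = (rng.random() - rng.random()) * lsb
    return quantize(min(1.0, max(0.0, v + noise)))
```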

### #5283057 Math behind anisotropic filtering?

Posted on 23 March 2016 - 09:56 PM

The best way to think of it is to picture what a pixel looks like when projected into texture space. If the surface is parallel to the camera's near plane, the pixel maps to a square in texture space (it can be freely rotated, so it may look like a diamond). For point sampling, the texel closest to the centre of this square is used. For bilinear, it's a weighted average of the 4 nearest texels. With mip mapping, a mip level is selected such that the square covers about 1 texel of area. Picture this square projected onto each mip level: with every step down the mip chain, each texel effectively doubles in size along each axis.

With anisotropic filtering, the pixel's footprint is elongated and forms a thin rectangle. There is no single mip level that fully represents the whole rectangle. Instead, the rectangle is subdivided into smaller, more square-like pieces. Each subdivision has its own sample centre and can be bilinear filtered; averaging those results better represents all the texels the rectangle touches.

Hope this helps. I know pictures would explain it better; I'm sure there are some good visual explanations out there.
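To make the footprint idea concrete, here is a rough sketch of the setup maths (based on the approximation in the OpenGL anisotropic filtering extension; function and variable names are mine): take the screen-space derivatives of the texel coordinates, fit the mip level to the footprint's short axis, and distribute probes along its long axis.

```python
import math

def aniso_setup(ddx, ddy, max_aniso=16):
    """ddx, ddy: per-pixel screen-space derivatives of the texel coords (u, v).
    Returns (num_probes, lod): how many bilinear probes to take along the
    footprint's long axis, and which mip level to take them from."""
    p_x = math.hypot(ddx[0], ddx[1])
    p_y = math.hypot(ddy[0], ddy[1])
    p_major = max(p_x, p_y)                    # long axis of the footprint
    p_minor = max(min(p_x, p_y), 1e-8)         # short axis
    ratio = min(p_major / p_minor, max_aniso)  # anisotropy, clamped
    num_probes = max(1, math.ceil(ratio))
    lod = math.log2(p_major / ratio)           # mip fits the short axis
    return num_probes, lod
```

A head-on square footprint gives 1 probe at mip 0; a footprint stretched 8:1 gives 8 probes, still from the sharp mip, instead of the single blurry sample plain trilinear would take.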

### #5282454 Per Triangle Culling (GDC Frostbite)

Posted on 21 March 2016 - 04:10 PM

The point is that ALU processing capability far exceeds that of the fixed-function triangle setup and rasterizer. Using compute you can prune the triangle set to get rid of the triangles that don't lead to any shaded pixels and would be discarded anyway. It's purely there to get a bit more performance.
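As a toy illustration of the idea (plain Python standing in for the compute shader; the real implementation compacts the surviving indices on the GPU): two of the cheap filters from this family are a facing test and a "does the bounding box contain any pixel centre" test.

```python
import math

def cull_triangles(positions, indices):
    """positions: screen-space (x, y) per vertex; indices: flat triangle list.
    Keep only triangles that are front-facing and whose bounding box
    contains at least one pixel centre (samples sit at *.5 coordinates)."""
    def covers_a_sample(lo, hi):
        # True if some integer + 0.5 lies inside [lo, hi].
        return math.floor(hi - 0.5) >= math.ceil(lo - 0.5)

    kept = []
    for t in range(0, len(indices), 3):
        (x0, y0), (x1, y1), (x2, y2) = (positions[i] for i in indices[t:t + 3])
        # Signed area <= 0: backfacing or degenerate, never shades a pixel.
        if (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0) <= 0.0:
            continue
        # Small-triangle test: misses every pixel centre, never shades a pixel.
        if not (covers_a_sample(min(x0, x1, x2), max(x0, x1, x2)) and
                covers_a_sample(min(y0, y1, y2), max(y0, y1, y2))):
            continue
        kept.extend(indices[t:t + 3])
    return kept
```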

### #5259209 How to draw 3d billboard fixed independent distance size?

Posted on 26 October 2015 - 05:01 PM

It's probably worth calculating the distance using a projection onto the view-space forward vector.

So

float dist = dot(billboardPos - cameraPosition, cameraFwd);
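The reason for the dot product rather than plain `length()`: it gives the distance to the plane through the billboard parallel to the screen, so the billboard's size doesn't change as it slides toward the screen edge. A Python sketch of the same maths (hypothetical names):

```python
def view_plane_distance(billboard_pos, camera_pos, camera_fwd):
    """Distance measured along the (normalised) camera forward axis,
    i.e. dot(billboardPos - cameraPos, cameraFwd)."""
    d = [b - c for b, c in zip(billboard_pos, camera_pos)]
    return sum(a * b for a, b in zip(d, camera_fwd))
```

Scale the billboard's world-space size proportionally to this distance and it will occupy a constant number of pixels on screen.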

### #5250628 Non-linear zoom for 2D

Posted on 04 September 2015 - 05:12 PM

The following code will do what you want assuming you have a constant update tick.

float Size = 16.0f;

then in the update

const float RateDecay = 0.99f;

Size = Size*RateDecay;

If you don't have a fixed timestep you can get the same behaviour with an exponential decay:

Size = Size * exp(-kDecay * dt);

where kDecay = -log(RateDecay) / fixedDt ties it back to the per-tick version.
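A quick sketch (hypothetical constant names) showing that the exponential form is frame-rate independent: with the decay constant derived from the per-tick multiplier, it gives the same answer for any way you chop up the elapsed time.

```python
import math

RATE_DECAY = 0.99          # per-tick multiplier from the fixed-step version
FIXED_DT = 1.0 / 60.0      # the fixed tick it was tuned for
K_DECAY = -math.log(RATE_DECAY) / FIXED_DT

def decay_fixed(size, ticks):
    # The fixed-timestep version: multiply once per tick.
    for _ in range(ticks):
        size *= RATE_DECAY
    return size

def decay_variable(size, dt):
    # Frame-rate independent: exp(-k*(a+b)) == exp(-k*a) * exp(-k*b).
    return size * math.exp(-K_DECAY * dt)
```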

### #5249003 HDR tonemapping without render-to-texture?

Posted on 26 August 2015 - 09:40 AM

Also, post-processing effects such as DOF, motion blur and bloom look far better if they operate on the HDR data prior to tonemapping.
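A toy illustration of why (illustrative numbers; simple Reinhard and a 1D box blur standing in for a real bloom chain): blurred on HDR data, a very bright pixel spills a bright halo onto its neighbours, but blurred after tonemapping, that pixel has already been squashed below 1.0 and the halo nearly vanishes.

```python
def reinhard(x):
    # Simple Reinhard tonemap: maps [0, inf) into [0, 1).
    return x / (1.0 + x)

def blur3(row):
    # Tiny 1D box blur standing in for the bloom blur pass.
    return [sum(row[max(0, i - 1):i + 2]) / len(row[max(0, i - 1):i + 2])
            for i in range(len(row))]

hdr = [0.1, 0.1, 50.0, 0.1, 0.1]              # one very bright pixel

halo_hdr = [reinhard(v) for v in blur3(hdr)]  # bloom before tonemap
halo_ldr = blur3([reinhard(v) for v in hdr])  # bloom after tonemap
```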

### #5244996 Tinted bloom?

Posted on 07 August 2015 - 12:07 PM

Looks like the bloom buffer is tapped twice with a centred radial UV shift; one tap is tinted purple, the other orange, and the two are added together.
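Roughly like this (a guess at the technique; the tint colours, shift amount and the `sample_bloom` fetch are all made-up stand-ins):

```python
def tinted_bloom(sample_bloom, uv, shift=0.03,
                 tint_in=(0.6, 0.2, 1.0),    # purple (guess)
                 tint_out=(1.0, 0.5, 0.1)):  # orange (guess)
    """Two taps of the bloom buffer, radially scaled about the screen centre
    (0.5, 0.5), one pushed inward and one outward, each tinted and summed."""
    cx, cy = uv[0] - 0.5, uv[1] - 0.5
    tap_in = sample_bloom(0.5 + cx * (1.0 - shift), 0.5 + cy * (1.0 - shift))
    tap_out = sample_bloom(0.5 + cx * (1.0 + shift), 0.5 + cy * (1.0 + shift))
    return tuple(a * t1 + b * t2
                 for a, t1, b, t2 in zip(tap_in, tint_in, tap_out, tint_out))
```

The two opposite radial shifts are what give the streaks a chromatic, lens-like fringe.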

### #5236764 Imposter Transitioning to Mesh

Posted on 25 June 2015 - 11:10 AM

I created a simple demo of this using WebGL. The blog post detailing what is happening isn't finished, but in the meantime you can look at the demo and see what I am doing in the ModelWarpVS shader.

https://dl.dropboxusercontent.com/u/20691/AAImpostor.html

### #5233835 Roughness in a Reflection

Posted on 09 June 2015 - 11:05 AM

This is the best blog post describing this irradiance convolution.

https://seblagarde.wordpress.com/2012/06/10/amd-cubemapgen-for-physically-based-rendering/

For a gentler introduction follow some of the references and in particular the AMD presentations on this.

### #5233591 Difference between camera velocity & object velocity

Posted on 08 June 2015 - 12:56 PM

The main difference is that camera motion blur can be computed from the depth buffer and some static per-frame constants, whereas object motion blur needs a velocity per pixel, or at least some per-pixel data from which to calculate one. Doing motion blur on skinned objects requires extra work: either more bone matrices in the VS, or caching the skinned vertex data for at least one frame.

Ultimately it's separated out for efficiency and ease of integration.
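A small sketch of the camera-only path (pinhole model, translation-only camera motion, hypothetical names, so no matrix inverses needed): reconstruct the point from the pixel and its depth, re-project it with last frame's camera, and the screen-space difference is the blur vector. Everything comes from the depth buffer plus two per-frame constants.

```python
def camera_velocity(ndc, depth, cam_now, cam_prev, f=1.0):
    """Screen velocity of a *static* point under camera motion.
    ndc: current screen position, depth: view-space depth,
    cam_now / cam_prev: camera positions this frame and last frame."""
    x, y = ndc
    # Reconstruct the world position from this pixel and its depth.
    wx = cam_now[0] + x * depth / f
    wy = cam_now[1] + y * depth / f
    wz = cam_now[2] + depth
    # Re-project it with last frame's camera.
    pz = wz - cam_prev[2]
    px = f * (wx - cam_prev[0]) / pz
    py = f * (wy - cam_prev[1]) / pz
    return (x - px, y - py)
```

Note how nearby points blur more than distant ones for the same camera slide, which is exactly the parallax you would expect. An object moving on its own breaks the static-point assumption, which is why object motion blur needs per-pixel velocity instead.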

### #5231538 How to get started with shadertoy?

Posted on 28 May 2015 - 01:50 PM

The first set of links are from iq, who is the co-creator of shadertoy.

http://iquilezles.org/www/index.htm (distance functions and others)

http://iquilezles.org/www/material/nvscene2008/nvscene2008.htm

This is also a great resource (as well as its references at the bottom).

The great advantage of shadertoy is the near-instant feedback loop from trying something to seeing it on screen.

### #5231274 Imposter Transitioning to Mesh

Posted on 27 May 2015 - 10:53 AM

I wrote an impostor-based system for a sports game where we would draw up to 20,000 instances of the crowd. I would then swap in the 3D model when you got close enough that the impostor texture would start to magnify. I managed to do this with no noticeable pop.

I have a blog post half written on how I did this and will update this thread when I finish it. The secret sauce, however, is to differentiate between internal and external perspective with regard to the impostor meshes. By this I mean we rendered the models to the impostor texture with an ortho projection. The quads were then rendered into the world with a perspective projection. When drawing the 3D mesh I would do this same ortho-in-local-space, persp-in-world-space operation. The internal ortho projection was then lerped to a full persp projection as the model got closer to the camera. The downside is you get a weird rotation of the 3D model as it approaches the camera, but it's something you have to look for to notice.
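The projection trick can be sketched in miniature like this (my own reconstruction from the description above; the names and the reduced maths are illustrative, not the game's code): at blend = 0 every vertex of the model divides by one shared depth, matching the flat impostor quad, and at blend = 1 each vertex divides by its own depth, giving full perspective.

```python
def blended_project(local_pos, model_depth, view_z, blend):
    """local_pos: vertex in model space (z along the view direction),
    model_depth: the single reference depth the whole model shares,
    view_z: distance of the model origin from the camera,
    blend: 0 = ortho-inside / persp-outside (impostor look), 1 = full persp."""
    x, y, z = local_pos
    persp_w = view_z + z            # each vertex's own depth
    ortho_w = view_z + model_depth  # one shared depth for the whole model
    w = ortho_w + (persp_w - ortho_w) * blend
    return (x / w, y / w)
```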

Video of the crowd in action

### #5224925 GPU Ternary Operator

Posted on 22 April 2015 - 03:56 PM

That optimization could potentially make things worse. It involves an indirect data lookup, which can be slower than simply a couple of predicated moves.

### #5222329 What to do when forward vector equals up vector

Posted on 09 April 2015 - 05:30 PM