# interpolating scalar instead of vector


## Recommended Posts

Hi,

In most simple lighting shaders, the vertex position gets subtracted from the light position and the result is normalized. This vector is then an input into the pixel function, along with the normal of the vertex. The pixel shader thus receives the interpolated light vector and the interpolated normal, and in the function it dots the two vectors, getting the light component, a single float. My question is: could I dot those two vectors in the vertex function already, and pass the resulting single float into the pixel function? Would it result in the same value as if the component were interpolated?
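To make the question concrete, here is a plain-Python sketch (not shader code) that simulates the rasterizer's linear interpolation along one edge, using made-up per-vertex values. It compares the two options: dot in the "vertex shader" and interpolate the scalar, versus interpolate the vectors and dot per "pixel".

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical per-vertex data for two vertices of an edge: normalized
# light directions and normals, as a vertex shader would output them.
l0, n0 = normalize((1.0, 0.0, 0.0)), normalize((1.0, 1.0, 0.0))
l1, n1 = normalize((0.0, 1.0, 0.0)), normalize((0.0, 1.0, 1.0))

t = 0.5  # sample point halfway along the edge

# Option A: dot in the vertex shader, let the rasterizer lerp the scalar.
scalar = dot(l0, n0) * (1.0 - t) + dot(l1, n1) * t

# Option B: lerp the vectors, re-normalize and dot per pixel.
per_pixel = dot(normalize(lerp(l0, l1, t)), normalize(lerp(n0, n1, t)))

print(round(scalar, 4), round(per_pixel, 4))  # 0.7071 vs 0.866: not the same
```

With these particular vectors the two options disagree noticeably, which already hints at the answers below.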

##### Share on other sites
Nope, that would be vertex lighting (here's an example with Gouraud shading), because you do your lighting calculation per vertex, not per pixel. A high triangle count can give you a pretty good, if not nearly identical, result though. Edited by unbird

##### Share on other sites

Yes, you can. No, the value will not be the same.

##### Share on other sites

If you go down that route, the scalar will be interpolated between vertices instead of computed per pixel. That basically means the pixels between vertices only ever get a linear blend of the per-vertex values.

Edited by BornToCode

##### Share on other sites

As a general rule, if you can plot a function as a straight line, then you can safely do the calculation in your vertex shader and rely on interpolation for the rest.  If a function plots as a curve (or anything other than a straight line, e.g. a wave) then you can't (well, you can, but you'll either need an extremely fine level of tessellation or you'll get artefacts).

Normalization involves a square root, and a square root plots as a curve, so you should do that in your pixel shader.  However, your first calculation (lpos - vpos) is a straight line, so that's OK to do in your vertex shader.

Likewise, a dot product can be expressed as a sum of squares: if you've got dot (x, y) and x == y, then (assuming 3-component vectors) dot (x, x) = x.x² + x.y² + x.z².  Again, this plots as a curve, so it's pixel shader work.
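The straight-line rule above can be checked numerically. A small sketch (plain Python, arbitrary sample values): interpolating the outputs of a linear function lands exactly on the function, while doing the same with a square root cuts a chord under the curve.

```python
import math

def lerp(a, b, t):
    return a + (b - a) * t

a, b, t = 1.0, 9.0, 0.25

# Linear function f(x) = 3x - 2: interpolating the outputs lands exactly
# on the function evaluated at the interpolated input.
f = lambda x: 3.0 * x - 2.0
assert lerp(f(a), f(b), t) == f(lerp(a, b, t))  # both are 7.0

# Non-linear function sqrt: interpolating the outputs cuts a chord under
# the curve, so it disagrees with sqrt of the interpolated input.
chord = lerp(math.sqrt(a), math.sqrt(b), t)   # 1.5
curve = math.sqrt(lerp(a, b, t))              # sqrt(3) ~ 1.732
print(chord, curve)
```

The gap between the chord and the curve is exactly the artefact you would see on a coarsely tessellated mesh.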

##### Share on other sites

Uff, my main concern is to move normalization into the vertex shader, mainly because of the reciprocal computation; division is extremely expensive. I believe that a unit vector interpolates into a unit vector, since the barycentric coordinates always sum to 1.0. So I'll just dot the interpolated light direction with the interpolated normal in the pixel shader.

##### Share on other sites

The linear interpolation between two unit vectors is not itself a unit vector. Normalization is a non-linear operation, thus order is relevant: normalization followed by interpolation is not the same thing as interpolation followed by normalization.
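This is easy to verify with a minimal sketch in plain Python, using two hypothetical unit vectors 90 degrees apart:

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

# Two unit vectors 90 degrees apart (hypothetical per-vertex normals).
u = (1.0, 0.0, 0.0)
w = (0.0, 1.0, 0.0)

# The rasterizer's halfway interpolation between them.
mid = tuple((a + b) * 0.5 for a, b in zip(u, w))

print(norm(mid))  # ~0.7071, well short of unit length
```

The more the two per-vertex vectors disagree, the shorter the interpolated vector gets, which is why the interpolated direction has to be re-normalized in the pixel shader.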

##### Share on other sites

> Uff, my main concern is to move normalization into the vertex shader, mainly because of the reciprocal computation; division is extremely expensive.

I wouldn't be so sure about that, and it seems as though you may be optimizing prematurely, with perhaps a smidgen of assuming that rules which hold good for CPUs also hold good for GPUs.

In the old days (i.e. ~10 years ago) it was common to see a cubemap lookup used as a replacement for per-pixel normalization.  Nobody does that anymore, and with very good reason: the performance of ALU instructions has increased at a rate that has far outpaced anything else, and - if your shader compiler optimizes them to interleave with texture lookups - they can often be had effectively for free.

In other words, pulling tricks to avoid a division is no longer something worth worrying about, at least not in the common case.  If you know that you're programming for a GPU on which division is expensive (i.e. the vendor has documented it as such), then that is the time to start looking at alternatives.  Not before.

So just normalize in your pixel shader and be done with it; I'm willing to bet that you'll find very little (or even no) significant performance difference.
