Archived

This topic is now archived and is closed to further replies.

d000hg

Is lighting per pixel or per vertex?


Recommended Posts

With the standard vertex formats I build using the FVFs (as opposed to writing shaders), can I make D3D do per-pixel lighting calculations? I know in a pixel shader it's relatively simple using dot products and the like, but I don't want to look at shaders as they're not that widely supported, and I don't reckon I need anything that advanced for my project. So can I make D3D do per-pixel lighting as standard? Additionally, can I use a bump map without writing a shader?

*sigh*

http://www.gamedev.net/community/forums/topic.asp?topic_id=96846


I'll reiterate:

D3D does per-vertex lighting. If you want per-pixel you do it with DOT3 or a pixel shader or a lightmap. It will not do per-pixel lighting any other way.

Yes, you can perform bump mapping without a shader. Use the EMBM technique, the DOTPRODUCT3 technique or the EMBOSS technique - all are perfectly possible without writing any shader code. The key is in the multitexture operations you set up.
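For the DOTPRODUCT3 route, the multitexture setup looks roughly like the following. This is a hypothetical sketch in the DirectX 8 style; `pDev`, `pNormalMap`, `pBaseTexture` and `lightDirAsColour` are placeholder names for a device, textures and a range-compressed light vector you'd have created already:

```cpp
// Sketch: DOT3 bump mapping with the fixed-function pipeline (DX8-style).
// Assumes pDev is a valid IDirect3DDevice8* and pNormalMap's texels
// encode unit normals as RGB.

// Stage 0: dot the per-pixel normal (from the texture) with the light
// vector, which is supplied via the texture factor render state.
pDev->SetTexture(0, pNormalMap);
pDev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_DOTPRODUCT3);
pDev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
pDev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_TFACTOR);

// Light direction, range-compressed from [-1,1] to [0,255] per channel.
pDev->SetRenderState(D3DRS_TEXTUREFACTOR, lightDirAsColour);

// Stage 1 (optional): modulate the dot3 result with the base texture.
pDev->SetTexture(1, pBaseTexture);
pDev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
pDev->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
pDev->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);
```

The key point is that all of this is texture stage state, so no vertex or pixel shader is involved; any multitexture-capable card that reports DOT3 support in its caps can run it.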

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com

1) OK then, I've looked at the DOTPRODUCT3 thingy. This'll let me do bump mapping, but is it actually per-pixel? I'm sure in the DX sample the lighting vector was only calculated once, not per pixel. My definition of per-pixel lighting is that a light over the middle of a large triangle lights the middle more than the vertices, not less as with a standard interpolation method.

2) If I can get per-pixel bump mapping as I describe, then surely I can do normal, un-bumpy per-pixel lighting? The demo uses the TFACTOR and a bump texture; I'd want just the lighting vector and a single surface normal. How is that achieved?

3) When Gouraud shading, D3D interpolates the vertex normals, but using DOTPRODUCT3 it appears normals are simply read from a texture. This implies you have a flat surface with a varying normal vector, but how can I get these normals combined with an interpolated surface normal so that there are no jumps in lighting at the joins of triangles (like you get with flat shading)?

Am I being really dense here?

1)
a. D3DTOP_DOTPRODUCT3 is absolutely 100% per-pixel. The clue is that the above state is a texture stage state - all texture stage operations are always PER PIXEL.

b. The important part of the diffuse lighting equation is the dot product operation which gets the cosine of the angle between two normalised vectors.

c. For a directional light, such as the D3D SDK sample, the light vector is the same for all pixels; the result of the dot product isn't.

d. For a point light, the direction to each vertex can be calculated then **encoded into a vertex colour**. This colour is interpolated due to Gouraud shading, therefore the light vector gets linearly interpolated for *every pixel*.

e. Since the normals are (usually) encoded into a texture map, a *unique* normal is also present for *every pixel*.

f. For *every pixel* the D3DTOP_DOTPRODUCT3 operation takes the two input vectors (say, one in a texture map and the other interpolated in a vertex colour) and performs a dot product between them. After the dot3 operation, all the components (R,G,B,A) of the *pixel* get set to the result of the dot product (it becomes a shade of grey between white and black).

g. Since the vectors are interpolated/stored and the lighting is done *per pixel*, you don't (with proper care) get drops in illumination at the middle of a triangle.
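The arithmetic in steps (d)-(f) can be sketched in plain C++. These helper functions are illustrative (the names are made up), showing the range-compression and the dot3 operation the hardware performs per pixel, not any actual D3D code:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Encode one component of a unit vector into a colour channel:
// [-1,1] -> [0,255], with 0 mapping to 128.
uint8_t EncodeComponent(float v) {
    return static_cast<uint8_t>(std::lround((v * 0.5f + 0.5f) * 255.0f));
}

// Decode a colour channel back to a vector component: [0,255] -> [-1,1].
float DecodeComponent(uint8_t c) {
    return c / 255.0f * 2.0f - 1.0f;
}

// What D3DTOP_DOTPRODUCT3 does per pixel: decode both colours (e.g. one
// from the normal map, one interpolated from the vertex colours), dot
// them, clamp to [0,1], and replicate the result as a shade of grey.
uint8_t Dot3(const uint8_t a[3], const uint8_t b[3]) {
    float d = 0.0f;
    for (int i = 0; i < 3; ++i)
        d += DecodeComponent(a[i]) * DecodeComponent(b[i]);
    float clamped = d < 0.0f ? 0.0f : (d > 1.0f ? 1.0f : d);
    return static_cast<uint8_t>(std::lround(clamped * 255.0f));
}
```

So a normal map texel of (128, 128, 255) means "straight up"; dotted against a light colour of (128, 128, 255) it yields full brightness, and against (128, 128, 0) it yields black.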


2)
a. You could encode the surface normal vector into one of the per vertex colours, then use TFACTOR for the light direction.

b. Or you could use a normal map texture with all its texels set to the same value.


3)
quote:
"When Gouraud shading, D3D interpolates the vertex normals"...


a. Nope, D3D interpolates **COLOURS**, not normals. If it interpolated normals across the polygon, that'd be Phong shading. Gouraud shading interpolates the colour *after* the illumination has been calculated per vertex (D3D illumination is calculated with "Phong illumination" and interpolated with "Gouraud shading").

b. DOTPRODUCT3 is essentially Phong shading; the main drawback/difference is that the conversion of vectors to colours is essentially a quantisation, so it's less precise. The other big issue is what happens to the per-pixel normal vector when the normal map texture gets filtered, mipmapped etc.

c. I don't quite understand what you mean about needing to interpolate a surface normal to prevent jumps at triangle joins. Since the texture encodes the normals *for that particular triangle*, there is no need for any other normal information, and you don't get any 'jumps' or seams. Think of a plain texture map: you don't get any seams if the mesh is textured properly. Likewise, if the normal map is applied properly, every pixel of every triangle has its own normal, including those on the edges of triangles.

d. The other places the normal map has issues, apart from those I mentioned above, are if its texture is mirrored or if you can't have unique normal map texels per pixel. In the latter case, you need to start using texture (tangent) space, where the normal maps are encoded in their own space rather than object space and you move the light vector into that space per vertex.
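That last per-vertex step can be sketched as follows. This is a hypothetical illustration: `T`, `B` and `N` are assumed to be the vertex's precomputed, unit-length tangent, bitangent and normal, and the result is what would then be encoded into the vertex colour:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Rotate a world-space light direction into tangent (texture) space by
// projecting it onto the vertex's tangent basis. DOTPRODUCT3 can then
// compare it directly against a tangent-space normal map.
Vec3 ToTangentSpace(const Vec3& light,
                    const Vec3& T, const Vec3& B, const Vec3& N) {
    return { Dot(light, T), Dot(light, B), Dot(light, N) };
}
```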

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com

Thanks Simon, this is all becoming a lot clearer. I've looked at the D3D sample, which as you say uses a directional light; do you have a link to an example with a point/spot light source? When you talk about using vertex colours as vertex normals and the like, doesn't that mean I can't store colour information in my vertices?
And what about when you have multiple lights? Do you set it up for a point source and then it'll work for directional too? Does the shader get called for each light to be applied to the polygon?

Additionally, how does one turn a heightmap into a bump map? Actually, that part's OK, but the sample encodes the heightmap as a texture somehow; how is that accomplished? Is each RGBA value in the texture treated as simply a DWORD value for the height?

[edited by - d000hg on June 5, 2002 4:36:05 PM]

I believe you are confusing height maps with normal maps. Normal maps use RGBA for the normal direction (R = X, G = Y, B = Z, A = W); height maps, on the other hand, are greyscale, and I believe the texture can be just merged using a pixel shader. You know how shading something (in real life) makes it look like it's indented? Well, that's the principle behind height maps.

No, the DGraphics sample uses a heightmap to generate a bump map by looking at how much the height changes from pixel to pixel. They also store the heightmap in a texture, and I just wondered what format they used to store the information.

All I can tell you is how it's usually done. It seems you have the source with you; why don't you just check? (Chances are it's greyscale.)

quote:
Original post by d000hg
No, the DGraphics sample uses a heightmap to generate a bump map by looking at how much the height changes from pixel to pixel. They also store the heightmap in a texture, and I just wondered what format they used to store the information.


Here's the deal...

The heightmaps aren't used directly by the DOTPRODUCT3 technique, they are **converted** at load time into NORMAL maps where each pixel represents a vector.

The conversion process takes the heightmap and works out a **pixel sized triangle** for every pixel in the heightmap.

The vertices for this triangle representing the current pixel simply use the x,y,height of 3 neighbouring pixels (left, right and above/below).

Once you have 3 vertices of a triangle, you can calculate its normal. After normalisation, this normal vector is encoded into a colour value and becomes the pixel in the NORMAL map corresponding to that height in the height map.
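That conversion can be sketched as follows. This is an illustrative reimplementation (the function and parameter names are made up, and it uses simple forward differences rather than the SDK's exact neighbour choice), not the sample's actual code:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Convert one heightmap pixel into a normal-map texel. Heights are 0..255
// greyscale values; `scale` controls how strong the bumps appear.
RGB HeightToNormal(const uint8_t* height, int w, int h,
                   int x, int y, float scale) {
    // Height differences with the right and lower neighbours
    // (clamped at the edges of the map).
    int x1 = (x + 1 < w) ? x + 1 : x;
    int y1 = (y + 1 < h) ? y + 1 : y;
    float dx = (height[y * w + x1] - height[y * w + x]) * scale;
    float dy = (height[y1 * w + x] - height[y * w + x]) * scale;

    // Normal of the pixel-sized facet, then normalise.
    float nx = -dx, ny = -dy, nz = 1.0f;
    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    nx /= len; ny /= len; nz /= len;

    // Range-compress [-1,1] into [0,255] per channel.
    auto enc = [](float v) {
        return static_cast<uint8_t>(std::lround((v * 0.5f + 0.5f) * 255.0f));
    };
    return { enc(nx), enc(ny), enc(nz) };
}
```

A completely flat heightmap produces the texel (128, 128, 255) everywhere, i.e. every normal points straight out of the surface.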


--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com

[edited by - S1CA on June 7, 2002 8:53:47 AM]
