Bump Mapping with Cg

Hi, I'm trying to implement bump mapping in a demo application. If I'm right, I could:
- use Blinn's original idea of modifying the vertex normals -> I'd need meshes with lots of polygons, because it's "per-vertex";
- use the hardware capabilities of modern graphics cards through the OpenGL multitexture features;
- use fragment shaders.
Since I've never studied shaders, I'd like to try that last way. I already have a class which contains a mesh and has a rendering method. Suppose I add two private attributes such as:

private:
    bool bumpMapping;
    const char *bumpMap;

At runtime, I will have some objects which need Bump Mapping and will have the boolean = true. How would the rendering method change? Should I load the fragment shader when the application starts, and then in the rendering method do:

void Render()
{
    if(bumpMapping)
    {
        glPushAttrib(GL_CURRENT_BIT | GL_LIGHTING_BIT | GL_TEXTURE_BIT);
        cgGLBindProgram(...);      // bind the fragment program
        cgGLEnableProfile(...);    // enable its profile
        cgGLSetParameter3fv(...);  // set the uniforms (light position, etc.)
        cgGLSetParameter3fv(...);
        cgGLSetParameter3fv(...);
    }

    // Drawing...
    glBegin(...);
    ...
    glEnd();

    if(bumpMapping)
    {
        cgGLDisableProfile(...);
        glPopAttrib();
    }
}
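
For the startup part, I imagine something like this (just my guess at the Cg runtime calls; "bump.cg" and "mainFP" are placeholder file/entry-point names):

#include <Cg/cg.h>
#include <Cg/cgGL.h>

CGcontext   context;
CGprofile   profile;
CGprogram   program;
CGparameter lightPosParam;

void InitShaders()
{
    // Create the Cg runtime context and pick the best fragment profile
    context = cgCreateContext();
    profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
    cgGLSetOptimalOptions(profile);

    // Compile and load the fragment program
    program = cgCreateProgramFromFile(context, CG_SOURCE, "bump.cg",
                                      profile, "mainFP", NULL);
    cgGLLoadProgram(program);

    // Parameter handle used later by cgGLSetParameter3fv in Render()
    lightPosParam = cgGetNamedParameter(program, "lightPos");
}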

Thanks for any advice :)
Yes. You also need to bind and enable/disable the texture for the bump map, using

cgGLSetTextureParameter(param, bumpMapID)

to set the texture, and

cgGLEnableTextureParameter(param)
cgGLDisableTextureParameter(param)

to enable/disable it.
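For example, a rough sketch (assuming your fragment program has a sampler2D parameter named "bumpMap", and bumpMapID is your GL texture object):

// Once, after loading the program: attach the GL texture object
CGparameter bumpParam = cgGetNamedParameter(program, "bumpMap");
cgGLSetTextureParameter(bumpParam, bumpMapID);

// Around the draw calls:
cgGLEnableTextureParameter(bumpParam);
// ... draw ...
cgGLDisableTextureParameter(bumpParam);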
I'm following this tutorial:
http://www.gamedev.net/reference/articles/article1903.asp
but a few things aren't clear to me..

for every vertex in the triangle {
    // Bind the light vector to COLOR0 and interpolate
    // it across the edge
    glColor3f([lightx], [lighty], [lightz]);


The fragment shader needs the light vector as input, and here it seems to be passed with glColor.. is it some kind of trick, or is there a meaning I can't see?
Moreover, the comment says "interpolate".. what does that mean? Will the shader receive the light vector for each vertex and automatically interpolate it along each edge of the triangle?

About texture management, you suggested the method
cgGLSetTextureParameter
to bind the bump texture, but in that piece of code it isn't used, while there's:
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, material.getNormalmapHandle());
glMultiTexCoord2fARB(GL_TEXTURE0_ARB,
    texture[Vector::X], texture[Vector::Y]);
Is it the same?
There's also this method:
glSecondaryColor3fvEXT
I looked it up with Google and on the OpenGL website, but found nothing: isn't it commonly used? It isn't even in the Cg Reference Manual..
Sorry for the jumble of questions, but, for example, regarding the interpolation question, where am I supposed to find this information?
In Cg-2.2_April2009_ReferenceManual.pdf, page 614, there's a table on "VARYING INPUT SEMANTICS - Interpolated Input Semantics", but it's not really explained..
Thanks.
Quote:Original post by Luca D

The fragment shader needs the light vector as input, and here it seems to be passed with glColor.. is it some kind of trick, or is there a meaning I can't see?
Moreover, the comment says "interpolate".. what does that mean? Will the shader receive the light vector for each vertex and automatically interpolate it along each edge of the triangle?



Yes. If you know the light position and the position of the vertex, then you can compute the "light vector". Because you are using a pixel shader, you can use the color parameter for whatever you want. Specifying it as a color merely tells Cg to pack the color values into the lightVector variable in the pixel shader.

The advantage of this method is that OpenGL automatically interpolates the light vector between vertices (so you don't have to), and it is probably faster.

What makes this method annoying is that

(1) OpenGL may or may not clamp color input values to [0,1], meaning you have to pack the light vector so that all components are between 0 and 1 (this is what the tutorial does)

(2) You waste bandwidth with redundant data
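
For (1), the range compression on the CPU side looks something like this (a sketch; l is the per-vertex light vector, already normalized):

// Map each component from [-1,1] into [0,1] so color clamping can't
// destroy it; the fragment shader undoes this with 2*c - 1.
glColor3f(0.5f * l.x + 0.5f,
          0.5f * l.y + 0.5f,
          0.5f * l.z + 0.5f);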

My method would be to pass the light position, in eye coordinates, as a uniform to a vertex shader. You can then compute the light vector in the vertex shader for every vertex (after transforming the vertex into eye coordinates) and pass it on in whatever varying you want (color, texture coordinate, etc.). This is almost certainly faster and reduces bandwidth.
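
A minimal sketch of that idea (the names are mine, not the tutorial's; the Cg source is held in a C++ string, and the light vector goes out through a TEXCOORD interpolant so it isn't clamped like a color):

const char *bumpVS =
    "void mainVP(float4 position      : POSITION,\n"
    "            out float4 oPosition : POSITION,\n"
    "            out float3 oLightVec : TEXCOORD1,\n"
    "            uniform float4x4 modelViewProj,\n"
    "            uniform float4x4 modelView,\n"
    "            uniform float3   eyeLightPos)  // light position in eye space\n"
    "{\n"
    "    oPosition = mul(modelViewProj, position);\n"
    "    float3 eyeVertex = mul(modelView, position).xyz;\n"
    "    oLightVec = normalize(eyeLightPos - eyeVertex);  // interpolated per fragment\n"
    "}\n";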

Quote:Original post by Luca D

About texture management, you suggested the method
cgGLSetTextureParameter
to bind the bump texture, but in that piece of code it isn't used, while there's:
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, material.getNormalmapHandle());
glMultiTexCoord2fARB(GL_TEXTURE0_ARB,
    texture[Vector::X], texture[Vector::Y]);
Is it the same?


Yes. I have never used that approach - I use cgGLSetTextureParameter, but they do the same thing.

Quote:Original post by Luca D
There's also this method:
glSecondaryColor3fvEXT
I looked it up with Google and on the OpenGL website, but found nothing: isn't it commonly used? It isn't even in the Cg Reference Manual..


glColor3f gets mapped to COLOR0 in the pixel shader. If you want to use COLOR1 etc., you need these functions to specify them (in the original OpenGL implementation no one conceived that you would ever need more than one color).
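
In other words (a hypothetical snippet, with both vectors range-compressed as above):

glColor3fv(packedLightVec);             // -> COLOR0 in the fragment program
glSecondaryColor3fvEXT(packedHalfVec);  // -> COLOR1 (GL_EXT_secondary_color)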

Quote:Original post by Luca D
Sorry for the jumble of questions, but, for example, regarding the interpolation question, where am I supposed to find this information?
In Cg-2.2_April2009_ReferenceManual.pdf, page 614, there's a table on "VARYING INPUT SEMANTICS - Interpolated Input Semantics", but it's not really explained..
Thanks.


I have never used these - I have always used the built-in interpolation (which is what you need for bump mapping).

As for the number of vertices, you don't need that many - that's the beauty of interpolation.
Ok, thanks for your answer, I'm working on what you suggested.
Studying shaders, I thought: OpenGL has only the flat and Gouraud shading models (glShadeModel(GL_FLAT) and glShadeModel(GL_SMOOTH)), right?
Now, thanks to shaders, it should be very simple to implement the Phong model. Is it commonly done? Does it considerably improve the visual quality?
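Something like this fragment sketch is what I have in mind (my own guess, not taken from the tutorial):

const char *phongFS =
    "float4 mainFP(float3 N : TEXCOORD0,   // interpolated normal\n"
    "              float3 L : TEXCOORD1,   // interpolated light vector\n"
    "              float3 V : TEXCOORD2,   // interpolated view vector\n"
    "              uniform float4 diffuseColor,\n"
    "              uniform float  shininess) : COLOR\n"
    "{\n"
    "    N = normalize(N); L = normalize(L); V = normalize(V);\n"
    "    float diff = max(dot(N, L), 0);\n"
    "    float spec = pow(max(dot(reflect(-L, N), V), 0), shininess);\n"
    "    return diff * diffuseColor + spec;\n"
    "}\n";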
I've gone through the whole tutorial and code, but I'd need some help to implement bump mapping in my code.
The main idea of the tutorial is to:
- calculate the objectToTangentSpace matrix transformation for each vertex of the mesh when the model is loaded;
- in the draw method of the mesh, for each vertex, bring the light and half vectors into tangent space and then pass all the necessary information to the fragment shader with glColor3fv, glSecondaryColor3fvEXT, glMultiTexCoord2fARB, etc.
Everything is done inside glBegin(GL_TRIANGLES) and glEnd().
My problem is that I'm drawing the model with:
glVertexPointer(...);
glNormalPointer(...);
glTexCoordPointer(...);
glDrawArrays(GL_TRIANGLES, ...);
How can I pass all those parameters, for each vertex, since I'm working with array pointers?
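I guess the array equivalent would be something like this sketch (glColorPointer feeding COLOR0 with the packed light vectors, glSecondaryColorPointerEXT feeding COLOR1, all filled per vertex on the CPU), but I'm not sure:

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);

glEnableClientState(GL_COLOR_ARRAY);               // packed light vectors -> COLOR0
glColorPointer(3, GL_FLOAT, 0, lightVecs);

glEnableClientState(GL_SECONDARY_COLOR_ARRAY_EXT); // packed half vectors -> COLOR1
glSecondaryColorPointerEXT(3, GL_FLOAT, 0, halfVecs);

glClientActiveTextureARB(GL_TEXTURE0_ARB);         // base texture coordinates
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

glDrawArrays(GL_TRIANGLES, 0, vertexCount);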
Hope it's clear,
thanks

This topic is closed to new replies.
