OpenGL: glScale(1,1,1) + lighting screws up quad rendering!?

Posted by npostavs
I'm getting a very strange bug in my OpenGL program:
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
#include <stdio.h>
#include <stdlib.h>

void draw(void)
{
    GLenum e;

    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0, 0, -5);

    /* no problem without call to scale */
    if (1) glScalef(1, 1, 1);
    glColor3f(1, 1, 1);
    glBegin(GL_QUADS);
    glNormal3f(0, 0, 1);
    glVertex2f(-.5, .5);
    glVertex2f(-.5, -.5);
    glVertex2f(.5, -.5);
    glVertex2f(.5, .5);
    glEnd();

    glFinish();
    e = glGetError();
    if (e != GL_NO_ERROR) {
        fprintf(stderr, "OpenGL: %s\n", gluErrorString(e));
        exit(1);
    }
}

#define W 400
#define H 400
void setup_gl(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glViewport(0, 0, W, H);
    gluPerspective(40.0, (GLfloat)W/(GLfloat)H, 0.1, 1000.0);

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);

    glShadeModel(GL_FLAT);
}


/* toolkit specific */

static void display(void)
{
    draw();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_DEPTH);
    glutInitWindowSize(W, H);
    glutCreateWindow ("gl window");


    glutDisplayFunc(&display);
    setup_gl();

    glutMainLoop();
    return 0;
}

It should draw a square in the middle of the screen (expected-quad.png), and it does when glScalef(1,1,1) isn't called (values other than 1 cause the same problem; with 1 it's obviously a no-op), or when lighting is not enabled. But with both the scale call and lighting enabled, the square looks rotated and stretched (broken-quad.png).

I don't know where the problem is: my program, the OpenGL implementation, or the graphics card (lspci gives: Intel Corporation 82815 Chipset Graphics Controller)? I've found the same problem using GLX and SDL, so I know it's not the toolkit. Any insights are appreciated.

broken: broken-quad.png
expected: expected-quad.png

I didn't get any response to this on linuxquestions.org; I hope an OpenGL-specific board will get more responses.

Reply by RPTD:
Wrong matrix order. Matrices are applied in the order they are defined. In your case you first translate and then scale. This is not what you want. You want first scale, then translate. Otherwise you scale about the world origin.

Reply:
The code posted is correct. The bug is, with 99.9% probability, in Intel's OpenGL implementation. Intel has always struggled with OpenGL support, even on newer chips (GMA 9xx, GMA Xxxx), and the 82815 chipset is some 9 years old, so don't expect a fix. Updating the drivers *may* help, assuming there are still any.

glScale/glTranslate etc. just manipulate the current matrix, so you may be able to work around the problem by doing your own matrix calculations (and then loading the matrix with glLoadMatrix*), or by using a library like GLM that does it for you. Whether this will help remains to be seen :)
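
For reference, such a workaround might look roughly like this (a minimal sketch only: load_translate_scale is a made-up helper, it relies on the GL headers already included in the original program, and whether the driver handles glLoadMatrixf any better than glScalef is an open question):

/* Build the modelview matrix by hand instead of calling
 * glTranslatef/glScalef, then load it with glLoadMatrixf.  The 4x4 matrix
 * is written in OpenGL's column-major order and is equivalent to
 * glTranslatef(tx,ty,tz) followed by glScalef(sx,sy,sz). */
static void load_translate_scale(GLfloat tx, GLfloat ty, GLfloat tz,
                                 GLfloat sx, GLfloat sy, GLfloat sz)
{
    GLfloat m[16] = {
        sx, 0,  0,  0,   /* column 0 */
        0,  sy, 0,  0,   /* column 1 */
        0,  0,  sz, 0,   /* column 2 */
        tx, ty, tz, 1    /* column 3: translation */
    };

    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);
}

In draw(), the glLoadIdentity/glTranslatef/glScalef sequence would then be replaced by load_translate_scale(0, 0, -5, 1, 1, 1).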

Reply:
Quote:
Original post by RPTD
Wrong matrix order. Matrices are applied in the order they are defined. In your case you first translate and then scale. This is not what you want. You want first scale, then translate. Otherwise you scale about the world origin.

It's the opposite: OpenGL post-multiplies each new matrix onto the current one, so the transforms are applied to the vertices in the reverse of the order you specify them.

So "scale about (0,0), then translate" (which is just what the OP's image shows) means specifying the translation first and the scale second in OpenGL.

This way you can build transformation hierarchies.

I don't see any problem with the code. Just a side note: scaling affects the normals too, so lighting will be incorrect, but that's not the problem here.
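
To see the order concretely, one can issue the calls in the OP's order with a non-trivial scale and read the modelview matrix back (a minimal sketch, not from the thread; print_modelview_order is a made-up helper and it assumes a current GL context plus the headers already included in the original program):

/* Specify a translate followed by a scale, then inspect the result.
 * Because OpenGL post-multiplies, the matrix is T * S: a vertex is scaled
 * first and translated second, so the translation column stays (0,0,-5)
 * instead of being scaled to (0,0,-10). */
void print_modelview_order(void)
{
    GLfloat m[16];

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0, 0, -5);
    glScalef(2, 2, 2);

    glGetFloatv(GL_MODELVIEW_MATRIX, m);  /* column-major */
    printf("diagonal (scale):   %.1f %.1f %.1f\n", m[0], m[5], m[10]);
    printf("translation column: %.1f %.1f %.1f\n", m[12], m[13], m[14]);
    /* prints "2.0 2.0 2.0" and "0.0 0.0 -5.0": the quad is scaled about
     * its own origin, then pushed back along -z. */
}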

Reply:
I thought it was the other way round. Whatever the case, I always do my own matrix calculations and load the matrix myself, as that is safer.

Reply:
I'm going to guess at this, but if you call glScale, the normals get scaled along with the vertices, so you need to either recalculate them or have OpenGL renormalize them (see the sketch after this list). Follow these rules in general for normalizing:
- if you never call glScale, do not enable any normalizing;
- if you use uniform scaling (e.g. glScale(2,2,2)), call glEnable(GL_RESCALE_NORMAL);
- if you use non-uniform scaling (e.g. glScale(1,4,2)), call glEnable(GL_NORMALIZE).
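
A minimal sketch of what that would look like for the OP's setup_gl() (enable_normal_rescaling is a made-up helper; note that with a scale of (1,1,1) the normals are untouched, so this is unlikely to be the actual fix here):

/* Enable the appropriate renormalization mode for the kind of scaling used.
 * GL_RESCALE_NORMAL requires OpenGL 1.2 or the EXT_rescale_normal extension. */
void enable_normal_rescaling(int uniform_scale)
{
    if (uniform_scale)
        glEnable(GL_RESCALE_NORMAL);  /* uniform scaling, e.g. glScalef(2,2,2) */
    else
        glEnable(GL_NORMALIZE);       /* non-uniform scaling, e.g. glScalef(1,4,2) */
}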

Reply by the original poster:
Quote:
The code posted is correct. The bug is, with 99.9% probability, in Intel's OpenGL implementation.

Good to know I'm not crazy. :)

Matrix order and normals shouldn't matter, since I scaled by 1, i.e. a no-op.

Quote:
Otherwise you scale about the world origin.

I don't follow what scaling about the world origin would mean; isn't it the case that translations don't affect scales (though scales do affect translations)?

Another bit of weirdness: adding a call to glRotatef(FLT_MIN, 1, 0, 0) (the angle has to be at least FLT_MIN, not 0) gives the expected output.
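
For completeness, that workaround would look something like this in the OP's draw() (a sketch only; draw_with_rotate_workaround is a made-up name, FLT_MIN comes from <float.h>, which the original program does not include, and why a near-zero rotation changes the driver's behaviour is unexplained):

#include <float.h>

void draw_with_rotate_workaround(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0, 0, -5);
    glScalef(1, 1, 1);
    glRotatef(FLT_MIN, 1, 0, 0);  /* angle must be at least FLT_MIN, not 0 */

    glColor3f(1, 1, 1);
    glBegin(GL_QUADS);
    glNormal3f(0, 0, 1);
    glVertex2f(-.5, .5);
    glVertex2f(-.5, -.5);
    glVertex2f(.5, -.5);
    glVertex2f(.5, .5);
    glEnd();

    glFinish();
}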
