Can't find the cause of "invalid enumerant" error

Started by
2 comments, last by Kalidor 16 years, 6 months ago
I managed to isolate the problem to this function, which creates a white texture of the requested size.
bool Texture::blank_image( GLuint w, GLuint h )
{
    GLuint buf[ w * h ];

    imgW = w;
    imgH = h;

    texW = w;
    texH = h;

    for( int i = 0; i < w * h; i++ )
    {
        buf[ i ] = 0xFFFFFFFF;
    }

    glGenTextures( 1, &texture );

    glBindTexture( GL_TEXTURE_2D, texture );

    glTexImage2D( GL_TEXTURE_2D, 0, 32, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, buf );

    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );

    return true;
}



It's pretty much identical to my image loading function except for the source of the pixel data. I have no idea what's causing it.

Learn to make games with my SDL 2 Tutorials

Hmm... I'm not entirely confident of this, but is "32" a valid value for parameter three of glTexImage2D? Should it not be "GL_RGBA" instead?

MWAHAHAHAHAHAHA!!!

My Twitter Account: @EbornIan

...well that was embarrassing.

Actually it's supposed to be 4. I forgot that it's bytes, not bits per pixel.


Quote:Original post by Lazy Foo
...well that was embarrassing.

Actually it's supposed to be 4. I forgot that it's bytes, not bits per pixel.
Actually it's neither. It's the internal format that OpenGL should store the texture data in. It can accept either the number of channels (1, 2, 3, or 4) or one of several constants. You should prefer to use the constants instead because 1-4 are only supported for backwards compatibility with OpenGL 1.0. You should also use the constant that describes what you want in the most detail. So if you want a 32-bit RGBA texture, use GL_RGBA8 instead of GL_RGBA.

This topic is closed to new replies.
