
Entire screen turns white when switching to textures


I am working to implement textures in my program. I have trimmed it down to displaying a single quad on screen, with the surrounding space showing the black background color. In my texture class I use glGetError to make sure there was no problem generating the texture, and I use glGetProgramiv to check my shaders.
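For reference, the program check is roughly this (just a sketch; "program" stands in for my linked shader program object):

GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    GLchar log[512];
    glGetProgramInfoLog(program, sizeof(log), nullptr, log);
    // inspect log in the debugger
}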

When my fragment shader uses the vertex colors, everything looks just fine. I also know the data is packed correctly, because each triangle shows up as expected. The problem occurs as soon as I switch from taking the fragment color from the incoming vertex data to using a texture lookup: the entire screen goes white. I am using GLFW for windowing and input, and I know the program hasn't frozen because I can still use my exit key to close it.
Here are my vertex and fragment shaders:
shader.vert
 

#version 430 core
layout(location = 0) in vec3 vPosition;
layout(location = 1) in vec2 vUV;
layout(location = 2) in vec3 vColor;
 
uniform mat4 MVP;
 
out vec4 color;
out vec2 uv;
 
void main() {
    gl_Position = MVP * vec4(vPosition, 1.0f);
 
    color = vec4(vColor, 1.0f);
    uv = vUV;
}

shader.frag
 

#version 430 core
in vec2 uv;
in vec4 color;
 
out vec4 fColor;
 
uniform sampler2D Sampler;
 
void main() {
    //fColor = color;
    fColor = texture(Sampler, UV);
}

zTexture.cpp
 

#include "zTexture.h"
 
zTexture::zTexture() {
 
}
 
bool zTexture::loadTexture(const char* fileName) {
    gli::texture Texture = gli::load(fileName);
    if (Texture.empty())
        return false;
 
    gli::gl GL(gli::gl::PROFILE_GL33);
    gli::gl::format const Format = GL.translate(Texture.format(), Texture.swizzles());
    GLenum Target = GL.translate(Texture.target());
 
    GLuint TextureName = 0;
    glGenTextures(1, &TextureName);
    GLenum e = glGetError();
    glBindTexture(Target, TextureName);
    e = glGetError();
    glTexParameteri(Target, GL_TEXTURE_BASE_LEVEL, 0);
    e = glGetError();
    glTexParameteri(Target, GL_TEXTURE_MAX_LEVEL, static_cast<GLint>(Texture.levels() - 1));
    e = glGetError();
    glTexParameteri(Target, GL_TEXTURE_SWIZZLE_R, Format.Swizzles[0]);
    e = glGetError();
    glTexParameteri(Target, GL_TEXTURE_SWIZZLE_G, Format.Swizzles[1]);
    e = glGetError();
    glTexParameteri(Target, GL_TEXTURE_SWIZZLE_B, Format.Swizzles[2]);
    e = glGetError();
    glTexParameteri(Target, GL_TEXTURE_SWIZZLE_A, Format.Swizzles[3]);
    e = glGetError();
    glTexParameterf(Target, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    e = glGetError();
    glTexParameterf(Target, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    e = glGetError();
 
    glm::tvec3<GLsizei> const Extent(Texture.extent());
    GLsizei const FaceTotal = static_cast<GLsizei>(Texture.layers() * Texture.faces());
 
    switch (Texture.target())
    {
    case gli::TARGET_1D:
        glTexStorage1D(Target, static_cast<GLint>(Texture.levels()), Format.Internal, Extent.x);
        e = glGetError();
        break;
    case gli::TARGET_1D_ARRAY:
    case gli::TARGET_2D:
    case gli::TARGET_CUBE:
        glTexStorage2D(Target, static_cast<GLint>(Texture.levels()), Format.Internal, Extent.x, Texture.target() == gli::TARGET_2D ? Extent.y : FaceTotal);
        e = glGetError();
        break;
    case gli::TARGET_2D_ARRAY:
    case gli::TARGET_3D:
    case gli::TARGET_CUBE_ARRAY:
        glTexStorage3D(Target, static_cast<GLint>(Texture.levels()), Format.Internal, Extent.x, Extent.y, Texture.target() == gli::TARGET_3D ? Extent.z : FaceTotal);
        e = glGetError();
        break;
    default:
        assert(0);
        break;
    }
 
    for (std::size_t Layer = 0; Layer < Texture.layers(); ++Layer){
        for (std::size_t Face = 0; Face < Texture.faces(); ++Face) {
            for (std::size_t Level = 0; Level < Texture.levels(); ++Level)
            {
                GLsizei const LayerGL = static_cast<GLsizei>(Layer);
                glm::tvec3<GLsizei> Extent(Texture.extent(Level));
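                // For cube maps, redirect uploads to the per-face target (POSITIVE_X + Face).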
                Target = gli::is_target_cube(Texture.target()) ? static_cast<GLenum>(GL_TEXTURE_CUBE_MAP_POSITIVE_X + Face) : Target;
 
                switch (Texture.target())
                {
                case gli::TARGET_1D:
                    if (gli::is_compressed(Texture.format())) {
                        glCompressedTexSubImage1D(Target, static_cast<GLint>(Level), 0, Extent.x, Format.Internal, static_cast<GLsizei>(Texture.size(Level)), Texture.data(Layer, Face, Level));
                        e = glGetError();
                    }
                    else {
                        glTexSubImage1D(Target, static_cast<GLint>(Level), 0, Extent.x, Format.External, Format.Type, Texture.data(Layer, Face, Level));
                        e = glGetError();
                    }
                    break;
                case gli::TARGET_1D_ARRAY:
                case gli::TARGET_2D:
                case gli::TARGET_CUBE:
                    if (gli::is_compressed(Texture.format())) {
                        glCompressedTexSubImage2D(Target, static_cast<GLint>(Level), 0, 0, Extent.x, Texture.target() == gli::TARGET_1D_ARRAY ? LayerGL : Extent.y, Format.Internal, static_cast<GLsizei>(Texture.size(Level)), Texture.data(Layer, Face, Level));
                        e = glGetError();
                    }
                    else {
                        glTexSubImage2D(Target, static_cast<GLint>(Level), 0, 0, Extent.x, Texture.target() == gli::TARGET_1D_ARRAY ? LayerGL : Extent.y, Format.External, Format.Type, Texture.data(Layer, Face, Level));
                        e = glGetError();
                    }
                    break;
                case gli::TARGET_2D_ARRAY:
                case gli::TARGET_3D:
                case gli::TARGET_CUBE_ARRAY:
                    if (gli::is_compressed(Texture.format())) {
                        glCompressedTexSubImage3D(Target, static_cast<GLint>(Level), 0, 0, 0, Extent.x, Extent.y, Texture.target() == gli::TARGET_3D ? Extent.z : LayerGL, Format.Internal, static_cast<GLsizei>(Texture.size(Level)), Texture.data(Layer, Face, Level));
                        e = glGetError();
                    }
                    else {
                        glTexSubImage3D(Target, static_cast<GLint>(Level), 0, 0, 0, Extent.x, Extent.y, Texture.target() == gli::TARGET_3D ? Extent.z : LayerGL, Format.External, Format.Type, Texture.data(Layer, Face, Level));
                        e = glGetError();
                    }
                    break;
                default:
                    assert(0);
                    break;
                }
            }
        }

    }
    e = glGetError();
    glBindTexture(Target, 0);
    e = glGetError();
    texture = TextureName;
    return true;
}
 
void zTexture::render(GLenum target) { //, GLuint ID
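    // Note: "target" here is a texture unit (e.g. GL_TEXTURE0), not a texture target.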
    glActiveTexture(target);
    glBindTexture(GL_TEXTURE_2D, texture);
}
 
GLuint zTexture::getID() {
    return texture;
}

And lastly, my vertex data:
 

    VertexData vertices[NumVertices] = {
        {{-1.0f, -1.0f, 0.0f}, {0.0f, 0.0f}, {1.0f, 0.0f, 0.0f}},
        {{1.0f, -1.0f, 0.0f}, {1.0f, 0.0f}, {0.0f, 0.0f, 1.0f}},
        {{-1.0f, 1.0f, 0.0f}, {0.0f, 1.0f}, {0.0f, 1.0f, 0.0f}},
        {{1.0f, 1.0f, 0.0f}, {1.0f, 1.0f}, {1.0f, 1.0f, 1.0f}}
    };
    GLuint indices[NumIndices] = { 0, 1, 2, 2, 1, 3 };
    glEnableVertexAttribArray(vPosition);
    glEnableVertexAttribArray(vUV);
    glEnableVertexAttribArray(vColor);
 
    glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, Buffers[IndexBuffer]);
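    // VertexData layout (tightly packed, 32-byte stride): vec3 position at offset 0, vec2 uv at 12, vec3 color at 20.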
    glVertexAttribPointer(vPosition, 3, GL_FLOAT, GL_FALSE, sizeof(VertexData), (void*)0);
    glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, sizeof(VertexData), (void*)12);
    glVertexAttribPointer(vColor, 3, GL_FLOAT, GL_FALSE, sizeof(VertexData), (void*)20);
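    // The Sampler uniform is never set explicitly; sampler uniforms default to 0, which matches GL_TEXTURE0 below.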
    myTex.render(GL_TEXTURE0);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
 
    glDisableVertexAttribArray(vPosition);
    glDisableVertexAttribArray(vUV);
    glDisableVertexAttribArray(vColor);

Honestly I don't know what other code is pertinent, as everything works fine until I switch my fragment shader from using the incoming color to using texture sampling. I appreciate any help!!


It's been a year or two since I've written pure OpenGL like this, so please forgive me if I'm wrong. But depending on which OpenGL profile you're using, you may need to call glEnable(GL_TEXTURE_2D) after binding your texture (or maybe before? or after you set the current texture unit). Also, I would make sure your texture dimensions are powers of two; some GPUs don't like non-power-of-two textures.
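Something like this is what I have in mind (just a sketch; "textureName" stands in for your texture object, and the glEnable call only matters in the fixed-function/compatibility profile, since core profiles drive texturing entirely from the shader):

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureName);
glEnable(GL_TEXTURE_2D); // compatibility profile only; generates an error in core profiles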


Thanks for the response.
While my code isn't doing anything with the error checking, I inspected the value of e in the debugger.
The problem is now identified: it was in my fragment shader. The incoming UV was stored in a vec2 called "uv", but my sampler line referenced "UV". It came down to case sensitivity.
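For anyone else who hits this, the corrected line in shader.frag:

fColor = texture(Sampler, uv); // must match the "out vec2 uv" in the vertex shader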


It always seems to be the littlest of bugs when it comes to programming! Happy to hear you've fixed it, though! :D

 

One thing that may help prevent issues like this is to follow a strict naming convention, such as CamelCase, for variables, classes, etc. I typically capitalize only the first letter of ClassNames, start variableNames with a lowercase letter, and fully capitalize CONST_NAMES (or #defines).

 

With that said, here's a small benefit to following CamelCase. You could call your texture class "Texture" instead of "zTexture", and when creating an instance you would just write "Texture* texture = new Texture();". Then, if the Texture class has any static functions, you could use the syntax "Texture::myStaticFunction()". Otherwise, people reading your code might see a variable named "Texture" and think it refers to the class itself rather than to the instance you've given that name.
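For example (hypothetical names):

class Texture { /* ... */ };          // ClassName
const int MAX_TEXTURES = 16;          // CONST_NAME
Texture* wallTexture = new Texture(); // variableName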

 

Hope you find that useful! Best of luck! :)
