
OpenGL Texture loaded from .png is not correctly drawn


redru

This is my first post, so hello everyone.

 

I've got a strange problem drawing a texture in an Android project using OpenGL ES 3.0. This is the resulting draw:

 

[attachment=25716:Screenshot_2015-01-29-23-41-08.png]

 

This is the result when I set 'GLES30.GL_TEXTURE_WRAP_S':

 

[attachment=25715:Screenshot_2015-01-29-23-34-52.png]

 

And this is the image:

 

[attachment=25717:tex_b2spirit.png]

 

This is my setup() method, where I create the VBOs and VAOs:


public void setup() {
    // Debug: log the unified vertex data, eight floats per vertex
    StringBuilder str = new StringBuilder();

    for (int i = 0, x = 0, y = 0; i < evoObj.getUnifiedData().length; i++) {
        str.append(evoObj.getUnifiedData()[i] + ", ");
        if (x % 7 == 0 && x != 0) {
            Log.i(TAG, "UNIFIED DATA " + y + ": " + str);
            str.delete(0, str.length());
            x = 0;
            y++;
        } else {
            x++;
        }
    }

    Log.i(TAG, "UNIFIED DATA: LOGGED");

    Log.i(TAG, "Buffers setup: start.");

    // Initialize vertex byte buffer for shape coordinates --------------------------------------
    vertexBuffer = ByteBuffer.allocateDirect(evoObj.getUnifiedData().length * OpenGLConstants.BYTES_PER_FLOAT)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    vertexBuffer.put(evoObj.getUnifiedData()).position(0);

    textureBuffer = ByteBuffer.allocateDirect(evoObj.getTexture().getBitmap().getByteCount())
            .order(ByteOrder.nativeOrder());
    textureBuffer.put(evoObj.getTexture().getTextureData()).position(0);

    // VERTEX DATA CONFIGURATION ----------------------------------------------------------------
    GLES30.glGenBuffers(1, VBOIds, 0);

    GLES30.glBindBuffer(GLES30.GL_ARRAY_BUFFER, VBOIds[0]);
    GLES30.glBufferData(GLES30.GL_ARRAY_BUFFER, evoObj.getUnifiedData().length * OpenGLConstants.BYTES_PER_FLOAT,
            vertexBuffer, GLES30.GL_STATIC_DRAW);

    // TEXTURE CONFIGURATION --------------------------------------------------------------------
    GLES30.glGenTextures(1, textureId, 0);

    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, textureId[0]);
    GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGB, 1024, 256, 0, GLES30.GL_RGB,
            GLES30.GL_UNSIGNED_BYTE, textureBuffer);

    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_LINEAR);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR);
//  GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_WRAP_S, GLES30.GL_CLAMP_TO_EDGE);
//  GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_WRAP_T, GLES30.GL_CLAMP_TO_EDGE);

    // Vertex Array Object (VAO) configuration --------------------------------------------------
    GLES30.glGenVertexArrays(1, VAOIds, 0);
    GLES30.glBindVertexArray(VAOIds[0]);

    // TEXTURES
    GLES30.glActiveTexture(GLES30.GL_TEXTURE0);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, textureId[0]);
    GLES30.glUniform1i(ShaderFactory.getInstance().SAMPLER_S_TEXTURE, 0);
    // TEXTURES END

    GLES30.glBindBuffer(GLES30.GL_ARRAY_BUFFER, VBOIds[0]);

    GLES30.glEnableVertexAttribArray(ShaderFactory.getInstance().LAYOUT_VERTEX);
    GLES30.glEnableVertexAttribArray(ShaderFactory.getInstance().LAYOUT_TEXTURE);

    // Positions: 3 floats at offset 0; UVs: 2 floats starting at float 3; stride is 8 floats
    GLES30.glVertexAttribPointer(ShaderFactory.getInstance().LAYOUT_VERTEX, 3,
            GLES30.GL_FLOAT, false, 8 * OpenGLConstants.BYTES_PER_FLOAT, 0);

    GLES30.glVertexAttribPointer(ShaderFactory.getInstance().LAYOUT_TEXTURE, 2,
            GLES30.GL_FLOAT, false, 8 * OpenGLConstants.BYTES_PER_FLOAT, 3 * OpenGLConstants.BYTES_PER_FLOAT);

    GLES30.glBindVertexArray(0);
    // -------------------------------------------------------------------------------------------
    Log.i(TAG, "Buffers setup: complete.");
}

This is my draw() method: 

public void draw() {
        GLES30.glUseProgram(ShaderFactory.getInstance().complexObjectProgram);

        // Load the MVP matrix
        GLES30.glUniformMatrix4fv(ShaderFactory.getInstance().MVP_LOC, 1, false,
                Camera.getInstance().getMvpMatrixAsFloatBuffer());

        // Bind this object Vertex Array Object (VAO) state and then draw the object
        GLES30.glBindVertexArray(VAOIds[0]);
        
        GLES30.glDrawArrays(GLES30.GL_TRIANGLES, 0, evoObj.getUnifiedData().length);

        GLES30.glBindVertexArray(0);
    }

The vertexAttribPointer uses a buffer with this structure: [V1][V2][V3][T1][T2][VN1][VN2][VN3]. I am pretty sure the data passed to the buffer is correct, because I have already debugged it. The problem is with the texture loaded from the image, because drawing plain colors instead of the texture works correctly.
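
For clarity, this is how I read that layout in terms of strides and offsets (an illustrative snippet only; the constant names here are made up, not from my project):

// Illustrative only: byte layout of one interleaved vertex.
// [V1][V2][V3] [T1][T2] [VN1][VN2][VN3]  ->  8 floats = 32 bytes per vertex
final int FLOATS_PER_VERTEX = 3 + 2 + 3;
final int STRIDE_BYTES      = FLOATS_PER_VERTEX * OpenGLConstants.BYTES_PER_FLOAT; // 32 bytes
final int POSITION_OFFSET   = 0;                                                   // V starts at byte 0
final int TEXCOORD_OFFSET   = 3 * OpenGLConstants.BYTES_PER_FLOAT;                 // T starts at byte 12
final int NORMAL_OFFSET     = 5 * OpenGLConstants.BYTES_PER_FLOAT;                 // VN starts at byte 20
final int vertexCount       = evoObj.getUnifiedData().length / FLOATS_PER_VERTEX;  // vertices, not floats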

 

I do not expect a response like 'the fault is on this line'. I am asking how you handle this kind of error, because I cannot work out where to look (is the buffer only partially loaded? is the image loaded incorrectly? could the shader be wrong?).

 

**********EDIT 1***********

 

Hello, sorry for not posting all the needed code; I only wanted to spare you the irrelevant parts. Here it goes:

 

Here I extract all the information from the .obj file, which is passed in as a String[]:

private EvoObj wrap(String[] lines, String name) {
    String[] lineParts;
    String[] indicesParts;

    ArrayList<Float> positions = new ArrayList<Float>();
    ArrayList<Float> textures = new ArrayList<Float>();
    ArrayList<Float> normals = new ArrayList<Float>();
    ArrayList<short[]> indices = new ArrayList<short[]>();

    for (int index = 0; index < lines.length; index++) {

        if (lines[index].startsWith("v ")) {
            lines[index] = lines[index].substring(2);
            lineParts = lines[index].split(" ");

            for (int i = 0; i < OpenGLConstants.SINGLE_V_SIZE; i++) {
                positions.add(Float.parseFloat(lineParts[i]));
            }

        } else if (lines[index].startsWith("vt ")) {
            lines[index] = lines[index].substring(3);
            lineParts = lines[index].split(" ");

            for (int i = 0; i < OpenGLConstants.SINGLE_VT_SIZE; i++) {
                textures.add(Float.parseFloat(lineParts[i]));
            }

        } else if (lines[index].startsWith("vn ")) {
            lines[index] = lines[index].substring(3);
            lineParts = lines[index].split(" ");

            for (int i = 0; i < OpenGLConstants.SINGLE_VN_SIZE; i++) {
                normals.add(Float.parseFloat(lineParts[i]));
            }

        } else if (lines[index].startsWith("f ")) {
            lines[index] = lines[index].substring(2);
            lineParts = lines[index].split(" ");

            for (int i = 0; i < OpenGLConstants.SINGLE_F_SIZE; i++) {
                indicesParts = lineParts[i].split("/");

                if (indicesParts.length > 0) {
                    short[] tmp = new short[3];

                    for (int e = 0; e < OpenGLConstants.SINGLE_F_INDICES_SIZE; e++) {
                        tmp[e] = Short.parseShort(indicesParts[e]);
                    }

                    indices.add(tmp);
                }
            }
        }
    }

    float[] positionsTmp = new float[positions.size()];
    for (int i = 0; i < positions.size(); i++) {
        positionsTmp[i] = positions.get(i);
//      Log.i(TAG, "Position " + i + ": " + positionsTmp[i]);
    }

    float[] texturesTmp = new float[textures.size()];
    for (int i = 0; i < textures.size(); i++) {
        texturesTmp[i] = textures.get(i);
//      Log.i(TAG, "Texture " + i + ": " + texturesTmp[i]);
    }

    float[] normalsTmp = new float[normals.size()];
    for (int i = 0; i < normals.size(); i++) {
        normalsTmp[i] = normals.get(i);
//      Log.i(TAG, "Normal " + i + ": " + normalsTmp[i]);
    }

    short[][] indicesTmp = new short[indices.size()][indices.get(0).length];
    for (int i = 0; i < indices.size(); i++) {
        for (int e = 0; e < indices.get(0).length; e++) {
            indicesTmp[i][e] = (short) (indices.get(i)[e] - 1);
//          Log.i(TAG, "Indices " + i + ": " + indicesTmp[i][e]);
        }
    }

    float[] unifiedData = OpenGLUtils.generateUnifiedData(positionsTmp, texturesTmp, normalsTmp, indicesTmp);
    EvoObj obj = new EvoObj(positionsTmp, texturesTmp, normalsTmp, unifiedData, name);

    return obj;
}

Here I unify all the data into a single array by resolving the index data:

public static float[] generateUnifiedData(float[] positions, float[] textures, float[] normals, short[][] indices) {
    float[] tmp = new float[(OpenGLConstants.SINGLE_V_SIZE + OpenGLConstants.SINGLE_VT_SIZE + OpenGLConstants.SINGLE_VN_SIZE) * indices.length];

    for (int i = 0, x = 0; i < indices.length; i++) {
        for (int e = 0; e < indices[i].length; e++) {
            if (e == 0) {
                for (int w = 0; w < OpenGLConstants.SINGLE_V_SIZE; w++) {
                    tmp[x] = positions[indices[i][e] * OpenGLConstants.SINGLE_V_SIZE + w];
                    x++;
                }
            } else if (e == 1) {
                for (int w = 0; w < OpenGLConstants.SINGLE_VT_SIZE; w++) {
                    tmp[x] = textures[indices[i][e] * OpenGLConstants.SINGLE_VT_SIZE + w];
                    x++;
                }
            } else if (e == 2) {
                for (int w = 0; w < OpenGLConstants.SINGLE_VN_SIZE; w++) {
                    tmp[x] = normals[indices[i][e] * OpenGLConstants.SINGLE_VN_SIZE + w];
                    x++;
                }
            }
        }

        Log.i(TAG, "i: " + i + " / x: " + x);
    }

    return tmp;
}

Here I load the bitmap from the .png file:

public static Texture importTexture(Context context, int resourceId) {
    Texture tex = null;

    InputStream is = Redru.getContext().getResources().openRawResource(R.raw.tex_b2spirit);
    Bitmap bitmap = BitmapFactory.decodeStream(is);

    tex = new Texture("Spirit texture");
    tex.setBitmap(bitmap);

    /* UNUSED
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);

    tex.setTextureData(stream.toByteArray()); */

    try {
        is.close();
    } catch (IOException e) {
        e.printStackTrace();
    }

    return tex;
}

At these links you can find the log, obj, and shader files:

 

http://www.ibizing.es/images/log.txt

 

http://www.ibizing.es/images/obj_b2spirit.txt

 

http://www.ibizing.es/images/shader_fragment_complex_object.txt

 

http://www.ibizing.es/images/shader_vertex_complex_object.txt

 

PS: the .obj is not mine; I downloaded it from the internet to learn OpenGL.

 

 

 

Thank you everyone,

Luca

L. Spiro

maybe the image is not loaded correctly?

That’s pretty hard to tell since you didn’t post anything related to the loading of the texture. Yes, it can be wrong.

can the shader be wrong?

That’s pretty hard to tell since you didn’t post anything related to shaders. Yes, it can be wrong.
 

I am asking if you know how to handle this kind of errors, because I cannot understand where I have to look

Look at the results on your screen.

#1: The colors you see on the object are not present in the original image. This means it is either loaded incorrectly or for any other reason you have uploaded junk to the texture slot for the shader to read. If the shader samples from a texture slot that is not bound by a texture, it will return black, so we can assume you have set the sampler state correctly and bound a texture (although why you would leave it bound while creating the VBO instead of binding it when drawing is beyond me).
The pattern of the image is not the same in both images, which further suggests the texture is nothing but random junk taken from CPU-side RAM.
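
For illustration, binding the texture per draw call (using the names from your posted code) would look roughly like this. This is a sketch only, not necessarily the cause of your corruption:

public void draw() {
    GLES30.glUseProgram(ShaderFactory.getInstance().complexObjectProgram);

    // Bind the texture for this draw instead of relying on state left over from setup()
    GLES30.glActiveTexture(GLES30.GL_TEXTURE0);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, textureId[0]);
    GLES30.glUniform1i(ShaderFactory.getInstance().SAMPLER_S_TEXTURE, 0);

    GLES30.glUniformMatrix4fv(ShaderFactory.getInstance().MVP_LOC, 1, false,
            Camera.getInstance().getMvpMatrixAsFloatBuffer());

    GLES30.glBindVertexArray(VAOIds[0]);
    // Note: glDrawArrays takes a vertex count; with 8 floats per vertex that is length / 8.
    GLES30.glDrawArrays(GLES30.GL_TRIANGLES, 0, evoObj.getUnifiedData().length / 8);
    GLES30.glBindVertexArray(0);
}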

You didn’t explain what modes you were using in the first shot, so I can only guess it was the default GL_REPEAT.
There is a maximum number of repeats the hardware can do, which might explain why it becomes black after a while.
You didn’t post any shaders so I can’t tell if you are passing UV’s through correctly.

If your texturing looks weird, one of the first things to do is print the UV coordinates so you can verify them.
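
Something along these lines would do it with your interleaved array (just a sketch; it assumes 8 floats per vertex with the UV at floats 3 and 4, as in your layout):

// Dump the UV of every vertex in the interleaved array.
final float[] data = evoObj.getUnifiedData();
for (int v = 0; v * 8 + 4 < data.length; v++) {
    float u = data[v * 8 + 3];
    float w = data[v * 8 + 4];
    // For a non-tiling texture these should all land in [0, 1].
    Log.i(TAG, "Vertex " + v + " UV: " + u + ", " + w);
}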
 

The vertexAttribPointer uses a buffer with this structure: [V1][V2][V3][T1][T2][VN1][VN2][VN3].

No it’s not.
It’s [V1][V2][V3][T1][T2][X][X][X] according to your code.
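
If you want those last three floats (your normals) to actually reach the shader, you would need a third attribute pointer, something like this sketch (LAYOUT_NORMAL is a name I am inventing here; use whatever attribute location your shader declares):

// Hypothetical: expose the normals stored at floats 5..7 of each interleaved vertex.
GLES30.glEnableVertexAttribArray(ShaderFactory.getInstance().LAYOUT_NORMAL);
GLES30.glVertexAttribPointer(ShaderFactory.getInstance().LAYOUT_NORMAL, 3,
        GLES30.GL_FLOAT, false, 8 * OpenGLConstants.BYTES_PER_FLOAT,
        5 * OpenGLConstants.BYTES_PER_FLOAT);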


L. Spiro

cgrant

From looking at your post, the issue is most likely related to how the image is being loaded, but as mentioned before you did not post anything to that end. I doubt it is the texcoords: incorrect texture coordinates would at least give you colors that are actually in the 'correct' image, whereas your screenshots seem to show what appears to be 'noise'.
How is the texture being loaded?

redru

I finally solved this problem, after several days of digging and following your suggestions:

 

I have replaced my method for loading the texture with this:

     final BitmapFactory.Options options = new BitmapFactory.Options();
     options.inScaled = false;
     final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);
 
After that, I changed the texture-generation code. I replaced this:
 
//        textureBuffer = ByteBuffer.allocateDirect(evoObj.getTexture().getBitmap().getByteCount())
//        		.order(ByteOrder.nativeOrder());
//        textureBuffer.put(evoObj.getTexture().getTextureData()).position(0);


// ..................................

//        GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGB, 1024, 256, 0, GLES30.GL_RGB, GLES30.GL_UNSIGNED_BYTE, textureBuffer);

with this:

GLUtils.texImage2D(GLES30.GL_TEXTURE_2D, 0, evoObj.getTexture().getBitmap(), 0);

This does its job. Later I will try to buffer the raw data and generate the texture myself, like I was doing before, but it is clear that my first code was wrong at loading, buffering, or generating the texture.
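
For anyone landing here later, the whole texture path now looks roughly like this (a condensed sketch of the changes described above, not a verbatim copy of my class):

// Decode the .png without density scaling, so the pixels match the source image
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;
final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);

// Create the texture object and let GLUtils pick the matching format/type for the Bitmap
GLES30.glGenTextures(1, textureId, 0);
GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, textureId[0]);
GLUtils.texImage2D(GLES30.GL_TEXTURE_2D, 0, bitmap, 0);

GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_LINEAR);
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR);

// Once uploaded, the pixels live on the GPU, so the Bitmap can be released
bitmap.recycle();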

 

Thank you all.
