k.pedersen

  1. Hello, Just checking if a function like [b]glVertexAttrib3f()[/b] is meant to set the default value for an [b]in[/b] attribute in a shader. For example, if I am drawing a cube which doesn't actually have texture coordinates but my shader usually supports them, would I have to have multiple shaders, or could I do something like the following... [code] glVertexAttrib3f(inTexCoords, 1, 2, 3); [/code] ...and expect the attribute in my shader to have the vec3 value of (1, 2, 3)? So far it just seems to not draw anything, so I am wondering if this is the correct functionality or if I have a bug somewhere... Cheers, Karsten
  2. Hmm, I don't think the code is quite right yet... The following is basically what I use. [code] glGenBuffersARB(1, &texCoordBuffer); glBindBufferARB(GL_ARRAY_BUFFER_ARB, texCoordBuffer); glBufferDataARB(GL_ARRAY_BUFFER_ARB, texCoords.size()*sizeof(float), &texCoords[0], GL_STATIC_DRAW_ARB); glVertexAttribPointerARB(inTexCoord, 2, GL_FLOAT, GL_FALSE, 0, 0); glEnableVertexAttribArrayARB(inTexCoord); glGenBuffersARB(1, &colorBuffer); glBindBufferARB(GL_ARRAY_BUFFER_ARB, colorBuffer); glBufferDataARB(GL_ARRAY_BUFFER_ARB, colors.size()*sizeof(float), &colors[0], GL_STATIC_DRAW_ARB); glVertexAttribPointerARB(inVColor, 4, GL_FLOAT, GL_FALSE, 0, 0); glEnableVertexAttribArrayARB(inVColor); glGenBuffersARB(1, &normalBuffer); glBindBufferARB(GL_ARRAY_BUFFER_ARB, normalBuffer); glBufferDataARB(GL_ARRAY_BUFFER_ARB, normals.size()*sizeof(float), &normals[0], GL_STATIC_DRAW_ARB); glVertexAttribPointerARB(inNormal, 3, GL_FLOAT, GL_FALSE, 0, 0); glEnableVertexAttribArrayARB(inNormal); glGenBuffersARB(1, &vertexBuffer); glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertexBuffer); glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertices.size()*sizeof(float), &vertices[0], GL_STATIC_DRAW_ARB); glVertexAttribPointerARB(inVertex, 3, GL_FLOAT, GL_FALSE, 0, 0); glEnableVertexAttribArrayARB(inVertex); glDrawArrays(GL_TRIANGLES, 0, vertices.size() / 3); [/code] So... 1) Generate the buffer 2) Bind the buffer 3) Set the bound buffer data 4) Set the shader attribute to point to the bound buffer 5) Enable the shader attribute rather than using the default value (i.e. the one set with glVertexAttrib3f)
  3. It should be: [code] //Model vertex coordinates glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); //Model texture coordinates glBindBuffer(GL_ARRAY_BUFFER, texcoordbuffer); glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0); [/code] (Note that the second parameter is the number of components per vertex, so 3 for positions and 2 for texture coordinates; it cannot be 0.) Also... I see another issue. [code] glDrawElements( GL_QUADS, faces.size(), GL_UNSIGNED_INT, reinterpret_cast<GLvoid*>(&faces[0])); [/code] should be glDrawArrays. After all, you have added your data to a VBO, so why are you now drawing the data from faces[]? That is also why it is displaying incorrectly, because faces[] contains the data for vertices and tex coords merged into one. So yeah, have a look at: [url="http://www.opengl.org/sdk/docs/man/xhtml/glDrawArrays.xml"]http://www.opengl.or...lDrawArrays.xml[/url] (can't remember the exact code myself, but let me know if you have any issues) [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img]
  4. Hi, Not entirely sure (there was quite a lot of code in that pastebin) but... [code] glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); //Model Vertices cordinates glVertexAttribPointer(0, 0, GL_INT, GL_FALSE,0,0); //Model Texture cordinates glVertexAttribPointer(1, 0, GL_FLOAT, GL_FALSE,0,0); [/code] I think you need to bind the buffer containing the texture before setting glVertexAttribPointer for the texture coordinates.
  5. If you don't need boned animation and you are using the fixed function pipeline in OpenGL, you might want to give the following free and open-source Obj Wavefront model library a try. [url="http://public.sanguinelabs.co.uk/expose/product.php?id=libwavefront"]http://public.sangui...id=libwavefront[/url] It can read in basic animation files and simply transforms and rotates the parts making up the model (if it has been exported with objects or groups). I wrote it for my University dissertation and it tends to be good enough for the majority of my projects so it might help you too if you do decide to go with Wavefront Obj.
  6. True, the specs of the hardware do confirm that it should be able to work fine with the shader; they even mention the hardware perspective correction. So yeah, it is probably a Linux driver issue. I did try out the software using the same version of Linux but on an nvidia card (using the open-source nouveau driver) and it worked fine. So it seems to be an issue with the intel drivers and my card. This is a bit of a pain because I have spent so long on this issue lol.
  7. The following is the texture coordinate related code that I use. I have missed off the vertex and normal stuff so as not to spam everyone with lots of code. Does this appear to be correct? Defines [code] GLuint coordBuffer; GLuint inTexCoord; std::vector<float> coords; [/code] Load [code] inTexCoord = glGetAttribLocation(shaderProgram, "in_tex_coord"); glGenBuffersARB(1, &coordBuffer); for(int i = 0; i < faces.size(); i++) { coords.push_back(faces.at(i)->getA()->getU()); coords.push_back(faces.at(i)->getA()->getV()); coords.push_back(faces.at(i)->getB()->getU()); coords.push_back(faces.at(i)->getB()->getV()); coords.push_back(faces.at(i)->getC()->getU()); coords.push_back(faces.at(i)->getC()->getV()); } glBindBufferARB(GL_ARRAY_BUFFER_ARB, coordBuffer); glBufferDataARB(GL_ARRAY_BUFFER_ARB, coords.size() * sizeof(float), &coords[0], GL_STATIC_DRAW_ARB); [/code] Draw [code] glEnableVertexAttribArray(inTexCoord); glBindBufferARB(GL_ARRAY_BUFFER_ARB, coordBuffer); glVertexAttribPointerARB(inTexCoord, 2, GL_FLOAT, GL_FALSE, 0, 0); glDrawArrays(GL_TRIANGLES, 0, faces.size() * 3); glDisableVertexAttribArray(inTexCoord); [/code] I have also attempted to use [code] glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); [/code] but it makes no difference. I assume this hint only has an effect when using fixed functionality? Edit: Following Madhed's software rendering suggestion, I tried it on a different machine (with a newer graphics card) and it all works perfectly. I noticed that it ran slower using shaders than just using the fixed function pipeline, suggesting that it is indeed doing some sort of software emulation. The graphics card in question is an intel GMA 945 running on Fedora 15's Xorg. Hmm, I guess I am forced to stick with the fixed function pipeline on this card? So much for compatibility... lol Thank you everyone for your help. Really appreciated! I have marked the topic as solved.
  8. Hello, I hate to bump my own topics but I still cannot seem to find a solution to this. All the example code I have found on the internet uses deprecated methods or just plain ignores the issue. I guess the simple question is, do I need to manipulate the UV coordinates in the vertex shader before sending them through to the fragment shader? Best Regards, Karsten
  9. I went with GLSL 1.20 to ensure compatibility... the same reason why I am looking to use stuff compatible with the 4.0 core profile and not use deprecated functionality [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img]. This is a screenshot of the tex coordinates as a color. (The last one was the result of my fiddling.) :/ So looking at this screenshot, my texture coordinates look spot on, but I am still getting that strange perspective warp. I have also noticed that as I look around the scene, even the colors seem to warp and change. This seems to suggest the perspective issue again :/ Do you notice the slight indent in the color where the odd texture split would be?
  10. Hello MarkS, Thanks for the info; this probably does work, but unfortunately it is no longer the correct way to do it. It uses the fixed function pipeline in OpenGL, such as ftransform(), and the built-in (and deprecated) matrix system provided by OpenGL. The way the texture coordinates are passed in also uses the gl_TexCoord[0] built-in, which is deprecated and will not work on the OpenGL ES 2.0 or 4.0 core profiles. Though AFAIK those texture coordinates are probably still not perspective correct either. I cannot test this because I am using the GLM matrix code rather than the OpenGL provided one, and thus ftransform() will not work.
  11. I have attached the result. (Edit: Submitted new screenshot.) Tbh, I wasn't expecting the cat model or the brick platform model to be completely red. However, I also have exactly the same problem with the skybox and, as you can see in the screenshot, that looks about what I would expect. The cat model gets its texture from a single unwrapped texture sheet; the brick platform model gets its coordinates from gtkradiant. All 3 models (cat, bricks, skybox) seem to work fine when using no shader at all (fixed function), so I really don't think there is anything wrong with my texture coordinates. However, I am really not sure why 2 of them are showing up as red. The cat's texture coordinates are usually between -1 and 1. The brick platform's texture coordinates can be between -10 and 10. Could this have anything to do with it? Again, both these models also work with Blender too. When I increase the size of the cat (like glScale) it also begins to get the strange coordinate skew (i.e. when the faces become large enough to notice).
  12. Here is a simplified version of my vertex shader. [code] #version 120 uniform mat4 in_projection; uniform mat4 in_modelview; attribute vec2 in_tex_coord; attribute vec3 in_position; varying vec2 pass_tex_coord; void main() { gl_Position = in_projection * in_modelview * vec4(in_position, 1); pass_tex_coord = in_tex_coord; } [/code] All I am really doing is passing through the texture coordinates.
  13. Hello NumberXaero, Yes, I can confirm that the brick texture draws correctly when using the fixed function pipeline. Also, my same code is loading the brick structure as the cat, so I know the texture coordinates are not the issue here. Any other suggestions?
  14. Hi, I have been attempting to replace all fixed functionality in an OpenGL project I wrote a while ago and have run into a snag which I haven't been able to resolve in the many hours I have spent on it. After much research I have found it is due to the texture coordinates not being perspective correct. What I have not been able to find is a way to fix this in the shader. (I have attached a screenshot demonstrating the issue; as you can see, the brick texture is not drawing correctly, creating a seam.) The line of code in my frag shader which I believe to be causing the issue is as follows. [code] #version 120 uniform sampler2D tex; varying vec2 pass_tex_coord; ... gl_FragColor = texture2D(tex, pass_tex_coord); [/code] So far I am under the impression that I need to obtain the w value for the pass_tex_coord s and t values. I am unsure of how to do this because I am simply passing in the uv texture coordinates from my model (VBO). I have also noticed that the model of the cat has the correct texture coord mapping (because the triangles are smaller?) If anyone could help shed some light on this or offer some example shader code to do this, I would be very grateful! Best Regards, Karsten