OpenGL OpenGL 3.3 VBOs and VAOs.. Why do my models not render properly?

Okay, I need some wisdom from this wonderful place - thank you in advance!

Here's the deal: I wrote (based on Ranger_One's method) a loader for .obj files. It works well! I am happy. However, we live in a world of 3.3 core these days, so the Immediate Mode rendering I was doing is no longer good enough, and we turn to VAOs and VBOs.

I'm familiar with the idea of how all this works, but not the implementation. I've had a few forays into looking this up, and I've got this far thanks to rastertek (once again), my computer graphics tutor, the SuperBible, and Beginning Game... 2 with OpenGL.

Enough history! Time for some code, because it's easier to see how it worked. In fact, I should probably take this time to apologize for posting such a lot of code. Sorry, dude.

This is how I render my client/CPU-side stored arrays with immediate mode at the moment:

[CODE]


//------------------elsewhere-----------------------------------------
struct Vertex3D { float x, y, z; };
typedef Vertex3D Normal3D;
struct UV { float u, v; };

class Mesh { // assume the mesh/model looks like this, for clarity
    Vertex3D * vertices;
    Normal3D * normals;
    UV * uvs;

    /* for VBO/VAO functionality */
    GLuint meshVAO;
    GLuint meshVertexBuffer;
    GLuint meshNormalBuffer;
    GLuint meshTexCoordBuffer;
    GLuint meshIndexBuffer;
    GLuint meshColB; //? Unused
}; // etc.
//----------------------------------------------------------
glBegin(GL_TRIANGLES);
for ( int i = 0; i < mesh.noTriangles; i++ ) { // for each triangle in the list
    Vertex3D vpos; Normal3D npos;
    for ( int j = 0; j < 3; j++ ) {
        // position of the j-th vertex of this triangle
        vpos.x = mesh.vertices[ mesh.triangleList[i].Vertex[j] ].x;
        vpos.y = mesh.vertices[ mesh.triangleList[i].Vertex[j] ].y;
        vpos.z = mesh.vertices[ mesh.triangleList[i].Vertex[j] ].z;
        // normal of the j-th vertex of this triangle
        npos.x = mesh.normals[ mesh.triangleList[i].Normal[j] ].x;
        npos.y = mesh.normals[ mesh.triangleList[i].Normal[j] ].y;
        npos.z = mesh.normals[ mesh.triangleList[i].Normal[j] ].z;

        // (sometimes tex coords too)
        glNormal3f(npos.x, npos.y, npos.z);
        glVertex3f(vpos.x, vpos.y, vpos.z);
    }
}
glEnd();

[/CODE]

And now I'd like to do it with VBOs and a VAO per model. So where's the error in my setup that's causing my model not to render properly?

I've loaded a model into memory, I'm not going to edit it, but I'd like my RAM back - so GL_STATIC_DRAW is fine for now.

[CODE]
void GLModel::LoadBuffers() {
    //if (glGenVertexArrays)
    //    cout << "okay" << endl;
    glGenVertexArrays(1, &meshVAO); // allocate an OpenGL VAO
    glBindVertexArray(meshVAO);     // bind the VAO so it records the buffers and attributes set up below

    // position data
    glGenBuffers(1, &meshVertexBuffer);              // generate an ID for the vertex buffer
    glBindBuffer(GL_ARRAY_BUFFER, meshVertexBuffer); // bind it and load the position data
    glBufferData(GL_ARRAY_BUFFER, sizeof( Vertex3D ) * mesh.noVerts, mesh.vertices, GL_STATIC_DRAW);
    glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3D), (const GLvoid *) 0 ); // attribute 0 gets position data
    /*
    (const GLvoid *) 0 refers to a pointer.. a pointer to where? or size of pointer? an offset?
    */
    // normal data
    glGenBuffers(1, &meshNormalBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, meshNormalBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof( Normal3D ) * mesh.noNormals, mesh.normals, GL_STATIC_DRAW);
    glVertexAttribPointer( 1, 3, GL_FLOAT, GL_FALSE, sizeof(Normal3D), (const GLvoid *) 0 ); // attribute 1 gets normal data
    // texcoord data
    glGenBuffers(1, &meshTexCoordBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, meshTexCoordBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof( UV ) * mesh.noTexCoords, mesh.texcoords, GL_STATIC_DRAW);
    glVertexAttribPointer( 2, 2, GL_FLOAT, GL_FALSE, sizeof(UV), (const GLvoid *) 0 ); // attribute 2 gets UV data

    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);

    // faces / indices
    glGenBuffers(1, &meshIndexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, meshIndexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof( Triangle ) * mesh.noTriangles, mesh.triangleList, GL_STATIC_DRAW);

    glBindVertexArray(0); // done recording into meshVAO
}
[/CODE]

Try moving the glEnableVertexAttribArray() calls to before you bind another VBO, and possibly enable each index for ONLY that VBO =)
I think that's what's happening here.

[quote name='Kaptein' timestamp='1351018443' post='4993165']
Try moving the glEnableVertexAttribArray() calls to before you bind another VBO, and possibly enable each index for ONLY that VBO =)
I think that's what's happening here.
[/quote]

I will let you know in the 'morrow friend! Thanks for your advice!

Now this may help [url="http://www.arcsynthesis.org/gltut/"]Learning Modern 3D Graphics Programming[/url] (OpenGL 3.3).

Aaand this may help too [url="http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml"]glVertexAttribPointer[/url] (spoiler: last parameter is an offset).

All OpenGL functions are documented on OpenGL.org, so as long as you have internet you can know more or less what each one does :D

Hi Guys.

I tried moving the glEnableVertexAttribArray( n ) calls to after the buffer creation for each buffer (vertices, normals, texcoords etc.), but the effect is the same.

It's been annoying me not knowing what the last parameter of glVertexAttribPointer was, and you know, I wondered if it was an offset! Thing is, there is no offset - each buffer is created from a separate array..? Thanks though, TheChubu - I'll be using that resource often, methinks! Also, I think I've come across that first link before - also good, so thank you!

[quote name='Kaptein' timestamp='1351018443' post='4993165']
and possibly enable that index for ONLYthat vbo =)
[/quote]

..can you nudge me a bit harder please? Thanks!

I'm wondering if it's my render-time code? The set-up at init time seeeeeems to be fine. Here's the render code (by the by, I'm not sure all of this is necessary):
[CODE]
void GLModel::RenderElements() {
    glPushMatrix();
    glTranslatef(pos.x, pos.y, pos.z);
    glColor3f( 0.0f, 1.0f, 0.0f );
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glDisable(GL_CULL_FACE);
    glBindVertexArray( meshVAO );
    glDrawElements(GL_TRIANGLES, mesh.noTriangles * 9, GL_UNSIGNED_INT, (const GLvoid*) 0);
    glEnable(GL_CULL_FACE);
    glPolygonMode(GL_FRONT, GL_FILL);
    glPopMatrix();
}
[/CODE]

Thanks again guys!

It doesn't look like you're using shaders to draw, but you are trying to use generic vertex attributes (with glVertexAttribPointer). The first argument to glVertexAttribPointer is supposed to be an attribute index in the shader program you plan to draw with - one that you either bound before linking said program or retrieved via glGetAttribLocation afterwards. I believe that on some hardware the way you're doing it (mapping position to 0, normal to 1, and tex coords to 2) might work, but I wouldn't trust it.

If you just want to get what you have now working without shaders, try using glVertexPointer, glNormalPointer, and glTexCoordPointer in lieu of glVertexAttribPointer, and replace glEnableVertexAttribArray with glEnableClientState (call it three times, passing one of GL_VERTEX_ARRAY, GL_NORMAL_ARRAY, GL_TEXTURE_COORD_ARRAY each time).

Not to nitpick, but 3.3 core has deprecated the matrix stack as well. If you want to bring your code completely "up to date", I would recommend handling matrices yourself or with a library, making use of shaders for drawing geometry, and also using generic attribute buffers (glVertexAttribPointer and the like).

You can take it step by step, but as the poster above says, start using shaders immediately - then we'll help you with everything else once you know for certain that your shaders are loaded and linked properly!

If you are doing everything manually, as it looks like you are, you should be able to find small functions to load and link shaders easily enough. Check and double check that the shaders are loaded without errors, then call 911-GAMEDEV again :)

The problem was indeed indexing - I've now got it working beautifully.

Thing is, struct Triangle held 9 ints, not 3 (an immediate-mode relic - now all gone). So I've replaced it with a struct Face that holds only vertex indices - a single index per corner that is used for all three arrays on the GPU: tex coords, normals, and verts.

One nested for loop (run before I bind any buffers) later and I'm good to go.

[CODE]glBindVertexArray( meshVAO );
glDrawElements(GL_TRIANGLES, mesh.noTriangles * 3, GL_UNSIGNED_INT, (const GLvoid*) 0);[/CODE]

And next I'm doing shaders - hence the setup above - this will be a fully textured, shaded, properly lit object in a few weeks..
Come March it will be .3DS, animated, instead of .obj too.. Wish me luck!

[quote name='Kaptein' timestamp='1351107389' post='4993519']
then you should be able to find small functions to load and link shaders easily enough
check and double check that the shaders are loaded and without errors, then call 911-GAMEDEV again
[/quote]

Indeed - a wrapper for the shaders is next on the list. Ahh shaders.. how I love thee!

..Also a few more preliminaries like a decent camera class, a wrapper for lights, implement DirectInput, etc etc

My tutor has read this thread, and he agrees with Koehler's advice. As do I - it just took a while to sink in. Apologies, guys!

It's a bit messy - this whole project - because I'm trying to rewrite something that was written for FFP and redirect it for 3.3 core, and perhaps some GLES (but forget I even mentioned that for now).

So my draw method should look like this, given that m_Program is a compiled, linked, shader program..?

[CODE]
void GLModel::RenderElements() {
    glUseProgram( m_Program );
    glTranslatef(pos.x, pos.y, pos.z);
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glDisable(GL_CULL_FACE);

    glBindVertexArray( meshVAO );
    glDrawElements(GL_TRIANGLES, mesh.noTriangles * 3, GL_UNSIGNED_INT, (const GLvoid*) 0);

    glEnable(GL_CULL_FACE);
    glPolygonMode(GL_FRONT, GL_FILL);
    glUseProgram( 0 );
}
[/CODE]

I'll obviously have to set some uniforms somewhere? I can figure that out, I'm sure.

Thanks again guys. :)

[quote name='Koehler' timestamp='1351092048' post='4993455']
the first argument to glVertexAttribPointer is supposed to be an attribute index in the shader program you plan to draw with, that you either bound before linking said program or retrieved via glGetAttribLocation after. I believe that on some hardware the way you're doing it (by mapping position to 0, normal to 1, and tex coords to 2) might work, but I wouldn't trust it.

If you just want to get what you have now working without shaders, try using glVertexPointer, glNormalPointer, and glTexCoordPointer in lieu of glVertexAttribPointer, and replace glEnableVertexAttribArray with glEnableClientState (call it 3 times, passing one of GL_NORMAL_ARRAY, GL_TEXTURE_COORD_ARRAY, GL_VERTEX_ARRAY each time)

Not to nitpick, but 3.3 core has deprecated the matrix stack as well. If you want to bring your code completely "up to date", I would recommend handling matrices yourself or with a library, making use of shaders for drawing geometry, and also using generic attribute buffers (glVertexAttribPointer and the like).
[/quote]

Shaders please!
Okay..
So the first parameter of glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3D), (const GLvoid *) 0 );
refers to, in the shader file, something like "layout (location = 0) in vec3 VxPos;", where location matches the first parameter of glVertexAttribPointer?

Deprecated Matrix Stack - indeed!
This has been bugging me no end. Is something like float some_matrix[16]; really the best way to represent a matrix? Would something like some_matrix[4][4] be more or less efficient? I imagine the code would be a bit nicer..?

Generic Attribute Buffers?

For what it's worth, the nitpicking is good, dude! Always, always helpful - I'm learning a lot here! :)

[quote name='mynameisnafe' timestamp='1351252862' post='4994112']
So my draw method should look like this, given that m_Program is a compiled, linked, shader program..?
(snip)
[/quote]

That looks correct! Just remember that if you're drawing later on with glPolygonMode(GL_FRONT,GL_FILL), you might as well re-enable GL_CULL_FACE. It should not affect your visual results and should improve performance. When you're ready to add uniforms, you'll pass them in your GLModel::RenderElements() function.

In C++, the choice between matrix[16] and matrix[4][4] comes down to which you like better. matrix[1][0] in the second is the same offset as matrix[4] in the first. The important thing to remember for OpenGL is that it is column-major in its memory layout (so array element 0 (or [0][0]) is row 0, column 0, but element 1 (or [0][1]) is row 1, column 0). Fortunately, if you want to access your 2D array as matrix[row][column], you can tell OpenGL to transpose it when you pass it in as a uniform (third argument to any of the glUniformMatrix_fv functions).

That said, it's probably easiest to work with a Matrix class so that you can do things like this:
[CODE]
mat4x4 viewProjectionMatrix = projectionMatrix*viewMatrix;
[/CODE]
Note that in shader code vectors and matrices are pre-defined types, so code like the above would work.

You may notice that I've referred to a "ViewProjection" matrix above, whereas OpenGL has stacks for the "ModelView" and "Projection" matrices. As you move away from using the matrix stack, the easiest way to think of this is as three separate matrices: Model, View, and Projection. Under the hood, it's effectively doing this:
[CODE]
vec4 finalVertexPos = Projection*View*Model*vertexPos;
[/CODE]

For convenience's sake, since the camera doesn't change over the course of a single frame, I like to combine the view and projection matrices together just once, and use the result like this:
[CODE]
vec4 finalVertexPos = viewProjection*Model*vertexPosition;
[/CODE]

I'm a bit short on time to explain how to generate view and projection matrices. For a temporary solution you can use glGetFloatv with GL_MODELVIEW_MATRIX or GL_PROJECTION_MATRIX to extract the ones you're already making with the matrix stack (exclude your last glTranslate if you want just your view matrix).

[b]Warning:[/b] glGet calls are comparatively slow and you are far better off generating your own, but for comparison's sake this will give you matrices that you can start with.

One step at a time is best.
First, focus on getting a shader to compile, using view/projection matrices you copied out of the stack with glGetFloatv, and replacing your glTranslate with a glUniform3f() to move your object around. This will give you a working set of values to verify your shader functions with.
Once that's done, start building (or just downloading one that already exists) a matrix math library, and work on building full model transforms (translation*rotation*scale) as well as your own view/projection calculations.

[quote name='mynameisnafe' timestamp='1351252862' post='4994112']
Generic Attribute Buffers?
[/quote]
That was me not properly referring to VAOs. You've got them already, so this is done :)

Don't worry - I've briefly seen some code that demonstrates how to build matrices, and with all the maths we did last year I have no excuse not to have it covered :)

[quote name='Koehler' timestamp='1351266019' post='4994175']
One step at a time is best.
[/quote]

Indeed! ..Replace glTranslate() with glUniform3f() - Done

[quote name='Koehler' timestamp='1351266019' post='4994175']
In C++, the choice between matrix[16] and matrix[4][4] comes down to which you like better. matrix[1][0] in the second is the same offset as matrix[4] in the first. The important thing to remember for OpenGL is that it is column-major in its memory layout (so array element 0 (or [0][0]) is row 0, column 0, but element 1 (or [0][1]) is row 1, column 0).
[/quote]

Sounds complicated, but it's something I'm eager to tackle - can't not, really! This is port of call number one, methinks, as I have no replacement for the deprecated stack, and the one we use at university belongs to my tutor, so I can't really use that.

[quote name='Koehler' timestamp='1351266019' post='4994175']
using view/projection matrices you copied out of the stack with glGetFloatv
[/quote]

... I'll ask my tutor and dig through some sample code about this - possibly I need those matrices first?

[quote name='Koehler' timestamp='1351266019' post='4994175']
work on building full model transforms (translation*rotation*scale) as well as your own view/projection calculations
[/quote]

I can do all these calculations in the vertex shader methinks, I just need to get the matrices in as uniforms :)

Basically, I need to figure out which uniforms I need and how to pass them to the shader (pretty sure I know this / have done this before), make the shader class a bit more static so I can say "give shader" in the GLModel's Init, and then test!

I'll come back in a few days and let you know how far I've gotten - I've got a few labs this week that will be helpful with this.

Thank you my friend! :)

Okay, I have the shaders working :) - I can see a white aeroplane, with the colour just hardcoded into the shader for now.

I'm also in the process of replacing anything that can be replaced with GLM, with GLM, which is how I'm getting uniform matrices into the shader.. ( Anything on the runtime side of loading the model that is - so animation / transformations and such )

The problem I'm having at the mo is finding a decent camera tutorial - the one I have at the moment is locked to look at the model while zooming in and out and moving around it, and it makes use of fixed-function bits and bobs.

Does anybody have a favourite (GLM ?) camera tutorial..?

A quick final question.. what is the difference between these two frustum calls? They produce different results..

[CODE]
GLdouble fW, fH;
fH = tan( (fovY / 2) / (180 * gl_PI) ) * zNear;
fH = tan( fovY / 360 * gl_PI ) * zNear;
fW = fH * aspect;

//glm::frustum(-fW, fW, -fH, fH, zNear, zFar);
glFrustum( -fW, fW, -fH, fH, zNear, zFar ); // <- deprecated
[/CODE]

What I posted before was wrong, so here is something right:
http://gamedev.stackexchange.com/questions/12726/opengl-es-2-0-understanding-perspective-projection-matrix

[quote name='Kaptein' timestamp='1352098546' post='4997464']
i think you need 2 glfrustum calls (one for each Z-plane), but i could be wrong
[/quote]
That is completely wrong.

[quote name='Kaptein' timestamp='1352098546' post='4997464']
glm::frustum probably creates a perspective frustum directly for you with both planes
it could also create 6 normalized planes internally that you could do frustum culling with, such as bool glm::pointInFrustum(x, y, z)
(but i dont know the specifics)
[/quote]
I have no real clue what you are talking about.

All glFrustum did was create a projection matrix that maps a capped world-space pyramid into [-1,+1]^3, then implicitly call glMultMatrix with that matrix (see [url=http://www.khronos.org/opengles/documentation/opengles1_0/html/glFrustum.html]the documentation[/url]). The reason glFrustum is deprecated is that modern OpenGL is no longer responsible for holding a static projection/modelview matrix and applying it - you are now responsible for generating the matrices and sending them to your shaders as needed.
glm::frustum is identical to glFrustum as far as the creation of the matrix goes. Instead of multiplying it onto a global state that no longer exists, though, it returns that matrix as a glm::mat4 so you can send it to your shaders.
