OpenGL 3.1

I have been using client-side vertex arrays, but now I have the problem that neither my CPU nor my GPU is at full usage, yet the game still lags. So I have decided to switch to VBOs and shaders; maybe the CPU-GPU bandwidth is causing the problem.
My target is to not use any deprecated functions.

I have been looking at the tutorials here:
[url=""]Tutorial Series[/url]

However, it seems like a lot of work to create the matrices, especially
considering these will keep changing.

I know that sin and cos are really expensive functions,
and these calculations below are performed for every vertex, right?
Wouldn't this be very slow?

[code]
const mat3 projection = mat3(
    vec3(3.0/4.0, 0.0, 0.0),
    vec3(0.0,     1.0, 0.0),
    vec3(0.0,     0.0, 1.0)
);
mat3 rotation = mat3(
    vec3(1.0, 0.0,         0.0),
    vec3(0.0, cos(timer),  sin(timer)),
    vec3(0.0, -sin(timer), cos(timer))
);
mat3 scale = mat3(
    vec3(4.0/3.0, 0.0, 0.0),
    vec3(0.0,     1.0, 0.0),
    vec3(0.0,     0.0, 1.0)
);
gl_Position = vec4(projection * rotation * scale * position, 1.0);
texcoord = position.xy * vec2(0.5) + vec2(0.5);
fade_factor = sin(timer) * 0.5 + 0.5;
[/code]

Here is a screenshot of a game I am going to try and change from OpenGL 1.1 to OpenGL 3.1:



Are VBOs faster than vertex arrays?

Are Vertex and Fragment Shaders faster than the Fixed Pipeline?

Are vertex shaders called for every vertex?

Is there a better way to do this than using sin() and cos()?

Yes, VBOs are faster than vertex arrays. Vertex shaders are called for every vertex, but the matrix is only computed once for the draw call, including sin+cos, so they're not done per vertex.

[quote name='zacaj' timestamp='1306717199' post='4817307']
Yes, VBOs are faster than vertex arrays. Vertex shaders are called for every vertex, but the matrix is only computed once for the draw call, including sin+cos, so they're not done per vertex.
[/quote]

Unfortunately the tutorial doesn't seem to teach that this kind of thing is usually set once as a shader uniform in your application, rather than computed for every single vertex in your shader's main(). Though chances are that GLSL compilers do what every good compiler would do: evaluate these expressions at compile time. Unless it turns out that shader complexity kills the performance, I'd worry more about making it work before making it faster.
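As a rough sketch of the uniform approach (variable and uniform names here are made up, not from the tutorial), the rotation matrix can be built once per frame on the CPU and uploaded, so the shader never calls sin/cos at all:

```cpp
#include <cmath>

// Build the same column-major X-axis rotation the shader computes,
// but only once per frame instead of once per vertex.
void buildRotation(float timer, float out[9]) {
    float c = std::cos(timer);
    float s = std::sin(timer);
    // Columns: (1,0,0), (0,c,s), (0,-s,c) -- matches the shader's mat3
    out[0] = 1.0f; out[1] = 0.0f; out[2] = 0.0f;
    out[3] = 0.0f; out[4] = c;    out[5] = s;
    out[6] = 0.0f; out[7] = -s;   out[8] = c;
}

// With a "uniform mat3 rotation;" declared in the vertex shader and its
// location fetched once via glGetUniformLocation, upload it each frame:
//   glUniformMatrix3fv(rotationLoc, 1, GL_FALSE, rotation);
```

The shader then just multiplies by the uniform, and the trig cost is paid once per frame instead of once per vertex.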

Thanks for the responses!

I have another question: in the same tutorial they specify the vertex positions in one VBO and keep a separate VBO for the element (index) list.

Is it possible just to submit the vertex positions in order?


I am making a voxel engine in which the cubes have textures (as seen in the pic above).
Each vertex is actually used 3 times because of how the geometry works, so implementing this would help.
Basically, I build the data for the GPU by looping through the blocks and checking whether the adjacent block in each direction exists.
If it doesn't exist, I add the face to a vector, "vertexBuffer".

How could I adapt this so it creates the vertex position and element lists?

I was thinking of a 3D array, but that would be a ton slower than what I have now.
Then I thought of a list, but having to search whether a vertex was already added would get slower and slower as the list grows.
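For what it's worth, if you did want shared vertices, the usual trick is a hash map from position to index rather than a linear search, which stays roughly constant time per lookup as the mesh grows. A minimal sketch (names and key packing are just illustrative):

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

// Returns the index of the vertex at integer block coordinates (x, y, z),
// appending it to the vertex list only if it hasn't been seen before.
// The key packs the coordinates into one 64-bit value (assumes each
// coordinate fits in 16 bits), so lookups are O(1) on average.
uint32_t getOrAddVertex(int x, int y, int z,
                        std::vector<Vec3>& vertices,
                        std::unordered_map<uint64_t, uint32_t>& seen) {
    uint64_t key = (uint64_t(uint16_t(x)) << 32) |
                   (uint64_t(uint16_t(y)) << 16) |
                   uint64_t(uint16_t(z));
    auto it = seen.find(key);
    if (it != seen.end()) return it->second;  // already in the buffer
    uint32_t index = uint32_t(vertices.size());
    vertices.push_back({float(x), float(y), float(z)});
    seen[key] = index;
    return index;
}
```

Each face then pushes the four returned indices into the element list instead of four full vertices.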

Also, I am going to add real lighting, so if a vertex is repeated, each copy will need its own normal.
Are normals specified with the vertices or with the element array?

Again thanks a ton!


[b]Current method of adding a face to the buffer:[/b]
[code]
// Add the -X face if the neighbouring block is not solid
if (block[position[x-1][y][z]].hard == false)
{
    iVertex3dQuad ver;
    ver.x1 = x; ver.y1 = y;   ver.z1 = z;
    ver.x2 = x; ver.y2 = y;   ver.z2 = z+1;
    ver.x3 = x; ver.y3 = y+1; ver.z3 = z+1;
    ver.x4 = x; ver.y4 = y+1; ver.z4 = z;

    fVertex2dQuad tex;
    tex.x1 = xMin; tex.y1 = yMin;
    tex.x2 = xMin; tex.y2 = yMax;
    tex.x3 = xMax; tex.y3 = yMax;
    tex.x4 = xMax; tex.y4 = yMin;
}
[/code]

A vertex is not just a position; it's the whole package of all its relevant attributes: texture coordinate, normal, color, and whatever else is passed as a generic attribute these days. Trying to share vertices is rather pointless for cubes, unless you can find some other way to make each one completely unique and still work for all faces.
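Concretely, "the whole package" usually ends up as one interleaved struct per vertex, something like this (the exact layout is just an example, not from the thread):

```cpp
#include <cstddef>

// One vertex = position + normal + texture coordinate, stored interleaved.
// A cube corner that touches three faces needs three of these, because the
// normal (and usually the texcoord) differs per face -- which is why
// sharing cube vertices buys little.
struct Vertex {
    float position[3];
    float normal[3];
    float texcoord[2];
};

// With the VBO bound, each attribute points into the same struct:
//   glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                         (void*)offsetof(Vertex, position));
//   glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                         (void*)offsetof(Vertex, normal));
//   glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                         (void*)offsetof(Vertex, texcoord));
```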

Thanks! I'm glad I can ask on here; otherwise I would have put all that work in only to find out it's not possible.

So now, how do I use just the vertex lists and not element lists?
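Dropping the element list just means writing every triangle's vertices out flat and drawing with glDrawArrays instead of glDrawElements. A sketch for one quad split into two triangles (names are made up):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Expand one quad (4 corners, counter-clockwise) into 6 flat vertices
// (two triangles), so the whole buffer can later be drawn with:
//   glDrawArrays(GL_TRIANGLES, 0, (GLsizei)buffer.size());
void appendQuad(const Vec3 corners[4], std::vector<Vec3>& buffer) {
    static const int order[6] = {0, 1, 2, 0, 2, 3};
    for (int i : order) buffer.push_back(corners[i]);
}
```

Two corners get duplicated per quad, which trades a little memory for not maintaining an index buffer at all.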



Does anyone know any good VBO tutorials?

(I'm targeting OpenGL 3.1)

I don't know about the best, but if you want the worst, here you go (the one at the bottom for your GL 3.0)

and also

