

Member Since 18 Apr 2013
Offline Last Active Dec 16 2014 06:23 AM

Posts I've Made

In Topic: cost of rendering 'modular' entity models

19 October 2014 - 07:52 PM

Taking your last idea: just generate one VBO on the fly with the limbs your player has chosen, rather than having 9.7 million unused VBOs.
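A minimal sketch of that idea, assuming each limb mesh is just an array of floats (the `Limb` type and `build_character_vertices` name are illustrative, not from the original post): copy the chosen limbs' vertex data into one array, then upload it once.

```c
#include <string.h>
#include <stddef.h>

/* One chosen limb's mesh data: xyz position triples. */
typedef struct {
    const float *vertices;
    size_t       count;     /* number of floats */
} Limb;

/* Concatenate each chosen limb's vertices into `out`.
 * Returns the total number of floats written. */
size_t build_character_vertices(const Limb *limbs, size_t limb_count,
                                float *out, size_t out_capacity)
{
    size_t written = 0;
    for (size_t i = 0; i < limb_count; ++i) {
        if (written + limbs[i].count > out_capacity)
            break;  /* out of room; real code would grow the buffer */
        memcpy(out + written, limbs[i].vertices,
               limbs[i].count * sizeof(float));
        written += limbs[i].count;
    }
    /* Then a single upload per character, instead of one VBO per
     * possible combination:
     *   glBufferData(GL_ARRAY_BUFFER, written * sizeof(float),
     *                out, GL_STATIC_DRAW);
     */
    return written;
}
```

You only pay for this once, when the player changes their character's parts, not per frame.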

In Topic: glGenFramebuffers access violation in GL 3. Deprecated?

11 May 2014 - 03:48 PM

No, they didn't deprecate the creation of framebuffers in the same version it was added :L You're probably trying to write to a read-only framebuffer, or your program isn't loading the correct OpenGL version's function pointers.

In Topic: Issue with GLX/X11 on Ubuntu not showing correct Window Contents

30 April 2014 - 04:42 AM

Have you tried clearing it with glClear(GL_COLOR_BUFFER_BIT) and then swapping the buffers? It sounds like you're just looking at a garbage front buffer.

In Topic: Custom view matrices reduces FPS phenomenally

19 April 2014 - 05:40 PM

whereas the GPU will need to do it per vertex.

If your graphics drivers are any good, they will only do the multiplication once. If your driver can't perform this optimization, switch GPU vendors.

In Topic: Custom view matrices reduces FPS phenomenally

19 April 2014 - 05:13 PM

So each time I translate() an object, or manipulate any of these matrices, all 5 are uploaded to my Uniform Buffer Object on the GPU.

You can't be doing this; it's too expensive.

If you modify a model matrix, don't re-upload your projection matrix just for shits and giggles; that's inefficient and a huge performance drop. Imagine how many times per second you're doing that!

Instead, figure out the byte offset of each matrix within the buffer and store those offsets. Then, whenever you NEED to update a matrix, map the buffer and modify just that one matrix at its offset.
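A sketch of the offset bookkeeping, under the assumption that the UBO holds five mat4s back to back (std140 gives a mat4 a 64-byte stride, so the arithmetic below holds; the matrix names and order are illustrative):

```c
#include <string.h>
#include <stddef.h>

/* Illustrative matrix slots in the UBO; order is an assumption. */
enum { MAT_MODEL = 0, MAT_VIEW, MAT_PROJ, MAT_NORMAL, MAT_MVP, MAT_COUNT };

#define MAT4_BYTES 64  /* 16 floats * 4 bytes: std140 stride of a mat4 */

/* Byte offset of a matrix within the uniform buffer. */
static size_t matrix_offset(int which)
{
    return (size_t)which * MAT4_BYTES;
}

/* Overwrite exactly one matrix in a mapped buffer region. In real code,
 * `mapped` would come from
 *   glMapBufferRange(GL_UNIFORM_BUFFER, matrix_offset(which),
 *                    MAT4_BYTES, GL_MAP_WRITE_BIT);
 * or you'd pass the same offset to glBufferSubData. */
void update_matrix(unsigned char *mapped, int which, const float m[16])
{
    memcpy(mapped + matrix_offset(which), m, MAT4_BYTES);
}
```

The point is that changing the view matrix touches 64 bytes of the buffer, not all five matrices.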

Also, instead of working out the model-view-projection matrix on the CPU, do the multiplication in your shader. GPUs are far better at matrix multiplication in almost any situation.
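What that looks like in a GLSL 3.30 vertex shader (block and matrix names here are illustrative, not from the thread): upload the three matrices separately and combine them on the GPU.

```glsl
#version 330 core

// The three matrices arrive separately via a std140 uniform block.
layout(std140) uniform Matrices {
    mat4 model;
    mat4 view;
    mat4 projection;
};

layout(location = 0) in vec3 position;

void main() {
    // Combined here; as noted above, a good driver can hoist this
    // uniform-only product so it isn't redone for every vertex.
    gl_Position = projection * view * model * vec4(position, 1.0);
}
```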