TRONJon

OpenGL
Custom view matrices reduce FPS phenomenally


So I've been upgrading my engine to use custom view matrices instead of the OpenGL gl_ModelViewMatrix and gl_ProjectionMatrix built-ins, which are deprecated in newer versions.

 

Now, as I'm using Shadow mapping, Skeletal Animation, and various other shader techniques, I've split my matrices into:

modelMatrix

viewMatrix

modelViewMatrix

projectionMatrix

modelViewProjectionMatrix

 

So each time I translate() an object, or manipulate any of these matrices, all 5 are uploaded to my Uniform Buffer Object on the GPU.
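
In other words, something like this minimal sketch (assuming a single std140 uniform block holding all five matrices; the names and layout here are illustrative, not the actual engine code):

```c
/* Sketch of the per-change upload pattern described above.
   Assumes an OpenGL 3.1+ context with a loader (e.g. GLEW) already set up. */
#include <GL/glew.h>

/* std140 layout: five mat4s, 64 bytes each, tightly packed. */
typedef struct {
    float model[16];
    float view[16];
    float modelView[16];
    float projection[16];
    float modelViewProjection[16];
} MatrixBlock;

static GLuint ubo;
static MatrixBlock matrices;

void createMatrixUBO(void)
{
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(MatrixBlock), NULL, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); /* block binding point 0 */
}

/* Called on every translate()/rotate()/etc.: all five matrices go up each time. */
void uploadAllMatrices(void)
{
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(MatrixBlock), &matrices);
}
```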

 

And my FPS has dropped from 1200 to 170. This is unacceptable for me, considering all I've done is change how the matrices are handled behind the scenes; nothing has changed in the engine itself.

 

Can someone tell me what has caused the drop in performance? I'm guessing it's something along the lines of:

- My matrix operations in Java are slow

- Uploading 5 matrices regularly is using up my bandwidth?


- My matrix operations in Java are slow

- Uploading 5 matrices regularly is using up my bandwidth?

 

Probably neither.  Unless you've got a really, really bad matrix library or an absolutely huge number of matrices to upload (both extreme and unlikely scenarios), you'll need to look elsewhere.

 

That UBO update - that's what I'd point my finger at.  There are threads about UBO performance, how slow UBOs are to update, and the hoops you need to jump through to make them fast again.

 

Before we go any further this is worth testing, and fortunately it's an extremely simple and minimally intrusive test.  Just convert from the UBO to standalone uniforms and see if things improve.  If they do, then we've established that yes, it's the UBO that's causing your performance problems.
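
As a sketch of that test, reusing the MatrixBlock struct from the sketch above (the uniform names are assumptions about the shader, not confirmed from the engine):

```c
/* Diagnostic: bypass the UBO entirely and set plain uniforms instead.
   Locations are looked up once at init time. */
#include <GL/glew.h>

static GLint locModel, locView, locModelView, locProj, locMVP;

void lookupUniformLocations(GLuint program)
{
    locModel     = glGetUniformLocation(program, "modelMatrix");
    locView      = glGetUniformLocation(program, "viewMatrix");
    locModelView = glGetUniformLocation(program, "modelViewMatrix");
    locProj      = glGetUniformLocation(program, "projectionMatrix");
    locMVP       = glGetUniformLocation(program, "modelViewProjectionMatrix");
}

/* Same data as the UBO version, uploaded as five standalone uniforms. */
void uploadAllMatricesAsUniforms(const MatrixBlock *m)
{
    glUniformMatrix4fv(locModel,     1, GL_FALSE, m->model);
    glUniformMatrix4fv(locView,      1, GL_FALSE, m->view);
    glUniformMatrix4fv(locModelView, 1, GL_FALSE, m->modelView);
    glUniformMatrix4fv(locProj,      1, GL_FALSE, m->projection);
    glUniformMatrix4fv(locMVP,       1, GL_FALSE, m->modelViewProjection);
}
```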


So each time I translate() an object, or manipulate any of these matrices, all 5 are uploaded to my Uniform Buffer Object on the GPU.


You can't be doing this; it's too expensive.

If you modify a model matrix, don't update your projection matrix just for shits and giggles; that's inefficient and a huge performance drop. Imagine how many times per second you are doing that!

Instead, figure out the offset of each matrix into the buffer and store those offsets; then, whenever you NEED to do an operation, map the buffer and modify just that one matrix.
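
A sketch of that idea, under the std140 layout assumed earlier (glMapBufferRange is one way to touch only the matrix that changed):

```c
#include <GL/glew.h>
#include <string.h>

/* Byte offsets of each mat4 within the block (std140: one mat4 = 64 bytes). */
enum {
    OFFSET_MODEL     = 0,
    OFFSET_VIEW      = 64,
    OFFSET_MODELVIEW = 128,
    OFFSET_PROJ      = 192,
    OFFSET_MVP       = 256
};

/* Update a single matrix in place instead of re-uploading all five. */
void updateMatrix(GLuint ubo, GLintptr offset, const float *mat4)
{
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    void *dst = glMapBufferRange(GL_UNIFORM_BUFFER, offset, 64,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
    if (dst) {
        memcpy(dst, mat4, 64);
        glUnmapBuffer(GL_UNIFORM_BUFFER);
    }
}

/* e.g. after moving the camera:
   updateMatrix(ubo, OFFSET_VIEW, camera.viewMatrix); */
```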

Also, instead of working out the model view proj matrix on the CPU, do the multiplication in your shader. GPUs are far better at matrix multiplication in almost any situation.

Upload them only once just before rendering - though the driver will probably already do this for you behind the scenes.

 

 

The more important point you might want to consider is how you measure your performance. If you get >1000fps with those various effects, your scene is probably too small to test on; if your rendering is not the bottleneck, then your memory transfers are, so you could probably throw a much more complex scene at the program and it'd run at the same speed. Another point is that measuring in fps can be misleading - a drop from 1200fps to 170fps is not that massive, since the render times went from roughly 1ms to 6ms. You should measure which parts take how much time; OpenGL has timer query objects for this.
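
For example, a minimal timer-query sketch (requires an OpenGL 3.3 context or ARB_timer_query; the structure is illustrative):

```c
#include <GL/glew.h>
#include <stdio.h>

/* Measure GPU time spent in a stretch of rendering with a timer query. */
static GLuint timerQuery;

void initTimer(void)
{
    glGenQueries(1, &timerQuery);
}

void drawFrameTimed(void)
{
    glBeginQuery(GL_TIME_ELAPSED, timerQuery);
    /* ... issue the draw calls you want to measure ... */
    glEndQuery(GL_TIME_ELAPSED);

    /* Reading the result here stalls until the GPU is done; real code
       should read the previous frame's result to avoid the sync point. */
    GLuint64 nanoseconds = 0;
    glGetQueryObjectui64v(timerQuery, GL_QUERY_RESULT, &nanoseconds);
    printf("GPU time: %.3f ms\n", nanoseconds / 1.0e6);
}
```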

Also, instead of working out the model view proj matrix on the CPU, do the multiplication in your shader. GPUs are far better at matrix multiplication in almost any situation.

 

...but the CPU typically only has to do this particular multiplication once, whereas the GPU will need to do it per vertex.  Yes, the GPU is faster, but tens of thousands of times per frame versus once?  It's not that much faster.



If you modify a model matrix, don't update your projection matrix just for shits and giggles, that's inefficient and a huge performance drop. Imagine how many times per second you are doing that!
This.

 

You don't update the uniforms each time you modify a matrix.

 

First you do all your computations (rotations, translations, scaling, model view projection, whatever) for all your objects. Then, when you're about to draw the mesh, you update the uniforms for that object.
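
A rough sketch of that ordering (the Object struct, the mat4_mul helper, and uploadObjectMatrices are hypothetical stand-ins, not the poster's API):

```c
#include <GL/glew.h>

/* Hypothetical per-object record; mat4_mul is an assumed math helper
   computing out = a * b for column-major 4x4 matrices. */
typedef struct {
    float model[16];
    float modelView[16];
    float modelViewProjection[16];
    GLuint vao;
    GLsizei indexCount;
} Object;

void mat4_mul(float out[16], const float a[16], const float b[16]);
void uploadObjectMatrices(const Object *obj); /* one uniform/UBO update */

void renderScene(Object *objects, int count,
                 const float *view, const float *projection)
{
    /* 1. Do ALL the matrix math first: pure CPU work, no GL calls. */
    for (int i = 0; i < count; ++i) {
        mat4_mul(objects[i].modelView, view, objects[i].model);
        mat4_mul(objects[i].modelViewProjection,
                 projection, objects[i].modelView);
    }

    /* 2. Then, per object, one upload immediately before its draw call. */
    for (int i = 0; i < count; ++i) {
        uploadObjectMatrices(&objects[i]);
        glBindVertexArray(objects[i].vao);
        glDrawElements(GL_TRIANGLES, objects[i].indexCount,
                       GL_UNSIGNED_INT, 0);
    }
}
```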


Another point is that measuring in fps can be misleading - a drop from 1200fps to 170fps is not that massive, since the render times went from roughly 1ms to 6ms.

 

I'd counter-argue that going from 1ms to 6ms is extremely significant, particularly if all other factors are equal between the two tests.  You've just blown one-third of your frametime budget on ... nothing.  Yes, that's significant.

 

Now, if it was going from - say - 8ms to 13 ms, you'd have a point, particularly if there was a nice new effect, higher LOD, or whatever to look at in return for it.  Blowing one-third of your frametime budget just on account of using a different way of doing the same thing?  Nope, you don't have a point, sorry.


whereas the GPU will need to do it per vertex.


If your graphics drivers are any good, it will only do the multiplication once. If your driver can't perform this optimization, switch gpu vendor.

whereas the GPU will need to do it per vertex.

If your graphics drivers are any good, it will only do the multiplication once. If your driver can't perform this optimization, switch gpu vendor.
I'd love to see proof of this. In my experience, if you ask the GPU to perform operations on uniforms per vertex/pixel, then the GPU will do so. The only "preshaders" that I've seen that are reliable are ones that modify your shader code ahead of time and generate x86 routines for patching your uniforms...
Anyway, even if this does work on one vendor, you're just playing into the hands of their marketing department: by deliberately writing bad code that's going to (rightfully) run slow for two-thirds of your users, you act as a marketing tool for one vendor :(

 

whereas the GPU will need to do it per vertex.


If your graphics drivers are any good, it will only do the multiplication once. If your driver can't perform this optimization, switch gpu vendor.

 

 

Who's going to tell that to your users after you release Turbo Wombat IV and it sells 20 million copies, but runs slow for 10 million of them?  You?


If you modify a model matrix, don't update your projection matrix just for shits and giggles, that's inefficient and a huge performance drop. Imagine how many times per second you are doing that!

 

Speaking of optimization: NV drivers do not re-transfer uniform values if they have not changed, but that is probably not the case for buffers. Of course, buffers are transferred to graphics card memory only just before they are actually used, so frequent changes before drawing should not affect performance significantly - especially because a uniform block is a small amount of data, and the calls only communicate with the driver's memory space in main memory.

 

 

Also, instead of working out the model view proj matrix on the CPU, do the multiplication in your shader. GPUs are far better at matrix multiplication in almost any situation.

 

 

I have to strongly disagree with this statement. Model/view/projection matrix calculation is far better done on the CPU side. In scientific visualization, where precision is important, the CPU (when I say this I mean Intel, because I'm not familiar with AMD's architecture) can generate matrices about 10 orders of magnitude more precise than the GPU. I don't even know what such a huge number is called. :) Transformations accumulate error; if double precision is not used, the transformation cannot be accurate enough. Furthermore, transcendental functions are calculated only in single precision on the GPU. CUDA and similar APIs emulate double precision for such functions, but OpenGL has no emulation of transcendental functions. I agree that hardware-implemented transcendental functions are enormously fast - no CPU can compete with GPUs in that field. Just a single clock interval for a function call! Although the number of SFUs (as they are called) is smaller than the number of SP units, the pipeline usually hides the latency imposed by waiting for the SFU. But, as I already said, high-level accuracy cannot be achieved.
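
To make the precision argument concrete, a small sketch of composing transforms in double precision on the CPU and rounding to float only at upload time (helper names are mine, not from any library):

```c
/* Compose transforms in double precision; convert to float only for the GPU. */
void mat4d_mul(double out[16], const double a[16], const double b[16])
{
    /* column-major, OpenGL-style: out = a * b */
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            double sum = 0.0;
            for (int k = 0; k < 4; ++k)
                sum += a[k * 4 + r] * b[c * 4 + k];
            out[c * 4 + r] = sum;
        }
}

void mat4d_to_float(float out[16], const double in[16])
{
    for (int i = 0; i < 16; ++i)
        out[i] = (float)in[i]; /* a single rounding step, at the very end */
}
```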


As far as I have understood it, Uniform Buffer Objects were created exactly for the purpose of bulk-updating multiple uniforms in a single call. Submitting a single UBO that contains a mere five matrices should be a trivial workload. Refactoring that into multiple UBOs, e.g. one containing model matrices and another containing projection matrices as was suggested above, sounds like a heavy anti-optimization - don't do that! (unless profiling suggests that two UBO uploads are faster than one in this case :o)

 

Or perhaps the discussion has confused plain uniforms set via glUniformMatrix4fv (without UBOs) with UBOs themselves. If you are not using UBOs and are manually updating uniform matrices with glUniformMatrix4fv, then there is benefit in not redundantly re-uploading matrices that haven't changed.

 

Hodgman's suggestion is the sanest here:

   - Stop measuring FPS; start measuring milliseconds instead. This will give a better sense of the actual difference in workload.

   - Use a CPU profiler with the old code and the new code to compare where the extra time is being spent. E.g. AMD CodeAnalyst is good (it works on non-AMD CPUs as well). If it turns out not to be a CPU-side slowdown (the profiles are identical), then use e.g. NVIDIA Parallel Nsight or AMD CodeXL to debug and profile the GPU side.


I can pretty much guarantee where the slowdown is.  It's not in the matrix multiplication, it's not in binding UBOs to the pipeline.  The OP is doing a separate UBO update for each object drawn.  That's potentially tens, hundreds or thousands of UBO updates per frame.

 

The slowdown is in GL's buffer object API, because you just can't make this kind of high-frequency update and still maintain performance when using it.  Any profiling is just going to show a huge amount of time in the driver waiting for buffer object API calls to finish, waiting on CPU/GPU synchronization, and waiting on GL client/server synchronization.

 

The solution is to not use small UBOs and to not update per object.  Instead you create a single UBO large enough to hold all objects, figure out the data that needs updating ahead of time, do one single big UBO update per frame (preferably via glBufferSubData), then a bunch of glBindBufferRange calls per-object.  That runs fast, and in the absence of persistent mapping it's the only way to get performance out of UBOs.
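
A sketch of that scheme (MatrixBlock is the five-matrix struct from the earlier sketch; MAX_OBJECTS and the packing layout are assumptions). Note that glBindBufferRange offsets must be aligned to GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT:

```c
#include <GL/glew.h>

#define MAX_OBJECTS 1024

static GLuint bigUBO;
static GLsizeiptr perObjectStride; /* sizeof(MatrixBlock) rounded up to alignment */

void createBigUBO(void)
{
    GLint align = 0;
    glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &align);
    perObjectStride = (GLsizeiptr)((sizeof(MatrixBlock) + align - 1)
                                   / align * align);

    glGenBuffers(1, &bigUBO);
    glBindBuffer(GL_UNIFORM_BUFFER, bigUBO);
    glBufferData(GL_UNIFORM_BUFFER, perObjectStride * MAX_OBJECTS,
                 NULL, GL_DYNAMIC_DRAW);
}

/* packedData holds every object's MatrixBlock, already laid out at
   perObjectStride intervals by the CPU-side code. */
void renderAll(const void *packedData, int objectCount)
{
    /* One big upload per frame... */
    glBindBuffer(GL_UNIFORM_BUFFER, bigUBO);
    glBufferSubData(GL_UNIFORM_BUFFER, 0,
                    perObjectStride * objectCount, packedData);

    /* ...then a cheap glBindBufferRange per object. */
    for (int i = 0; i < objectCount; ++i) {
        glBindBufferRange(GL_UNIFORM_BUFFER, 0, bigUBO,
                          i * perObjectStride, sizeof(MatrixBlock));
        /* ... draw object i ... */
    }
}
```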

Edited by mhagain

Also, instead of working out the model view proj matrix on the CPU, do the multiplication in your shader.

Never perform matrix multiplication in a shader. All matrices that will be used in the shader should already be precomputed on the CPU.


L. Spiro

 

Never perform matrix multiplication in a shader. All matrices that will be used in the shader should already be precomputed on the CPU.

 

 

Except for skinning; but I agree with L. Spiro, because you have to keep in mind that this mul will be done for each vertex or each pixel.

Edited by Alundra
There are cases where it can be a good idea to keep view-projection and world matrices separate. Say you've got 10k static objects: if these transforms are merged, the CPU has to perform 10k world*viewProj multiplications and upload the 10k resultant matrices every frame. If they're kept separate, the CPU only has to upload the new viewProj matrix and doesn't have to change any per-object data at all (but of course the GPU now has to do the 10k*numVerts matrix concatenations instead).
The "right" decision depends entirely on the game (and target hardware).

 

upgrading my engine to use custom view matrices instead of the OpenGL gl_ModelViewMatrix and gl_ProjectionMatrix built-ins, which are

Were you setting any other uniforms in the old, deprecated scenario?

How many uniform writes do you do per frame, roughly (batch complexity)?


