# OpenGL allowing for instancing and lod

## Recommended Posts

1. A video memory manager: I am thinking of managing video memory as arbitrary data: let a geometry chunk upload any data it wants, because it will be responsible for making sure the vertex shader knows how to interpret it. Direct3D doesn't seem to care what is in the buffers as long as you know how to interpret the data in your shaders. But managing video memory is very different from any memory management I've ever done because:
• It's okay to overwrite some memory, because the memory is backed up (or at least easily accessible) in system memory. (I wish I knew something about caches.)
• It has to be very fast, or at least understand video memory access semantics, because accesses are much slower than to system memory: there's the lock/unlock you have to do to let your program touch the memory, and the fact that slow accesses stall the CPU-GPU parallelism that is so important to framerate.
• The amount of free memory can't (and shouldn't) be accurately determined. You can't predict when a lost device will invalidate all memory, or when some memory will magically free itself up for your use.
• Also, on older cards (older shader versions?) you can't render from an offset into a vertex buffer. I'm not sure yet how this will affect my manager.
I have not yet tried my hand at writing such a memory manager, but I would like to hear what people think. Are there points about video memory that I'm missing? Is there a much better way to deal with video memory?
2. Level of detail: for pre-computed LOD meshes, the engine can give the geometry chunk some information about the level of detail needed, and the chunk returns the appropriate mesh data. What I have some questions about is LOD at runtime: are LOD schemes feasible with vertex shaders? I don't know enough about shaders or about current LOD algorithms to judge whether my current system works with them.
3. Instancing: say I have a certain complex teapot model in my teapot warehouse scene. When rendering, I want my engine to identify that all the teapots are the same (well, teapots with identical materials and geometry chunks anyway) and group them for instancing. I think that after sorting the render queue (first by render states, then by geometry chunks), multiple instances of an object will appear together. They can then be identified, and when they are rendered the engine can take their per-instance data and perform stream instancing (at least). Is this the wrong way to use instancing? Should I have to explicitly mark a model as being rendered several times in a scene for my engine to use instancing?
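For point 1, the "buffers as arbitrary bytes" idea can be sketched as a sub-allocator that only hands out byte ranges inside one large GPU buffer and never interprets the contents. This is a minimal first-fit free-list sketch under my own assumptions (class and method names are made up, eviction and free-block coalescing are omitted), not a full manager:

```cpp
#include <cassert>
#include <cstddef>
#include <list>

// Hypothetical sub-allocator for one large GPU buffer: it hands out
// byte ranges (offset + size) and never touches the data itself, so a
// geometry chunk can store whatever layout its vertex shader expects.
class BufferAllocator {
public:
    explicit BufferAllocator(std::size_t capacity) {
        free_.push_back({0, capacity});
    }

    // First-fit search: returns true and sets 'offset' on success.
    bool allocate(std::size_t size, std::size_t& offset) {
        for (auto it = free_.begin(); it != free_.end(); ++it) {
            if (it->size >= size) {
                offset = it->offset;
                it->offset += size;
                it->size -= size;
                if (it->size == 0) free_.erase(it);
                return true;
            }
        }
        return false;  // caller could evict a chunk and retry, since
                       // the data is recoverable from system memory
    }

    // Returns a range to the free list (no coalescing, for brevity).
    void release(std::size_t offset, std::size_t size) {
        free_.push_back({offset, size});
    }

private:
    struct Range { std::size_t offset, size; };
    std::list<Range> free_;
};
```

The "okay to overwrite" property from the bullets above is what makes eviction cheap here: on failure or device loss the manager can simply drop ranges and let chunks re-upload.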
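For point 2, the pre-computed case is straightforward to sketch: each geometry chunk stores its meshes from finest (index 0) to coarsest, with a switch distance per level, and returns the first level whose threshold the camera distance does not exceed. The function name and the distance-threshold scheme are my own assumptions; runtime LOD in a vertex shader would be a separate mechanism:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical LOD pick for pre-computed meshes: switchDistances[i] is
// the farthest camera distance at which level i should still be used.
std::size_t pickLod(const std::vector<float>& switchDistances,
                    float cameraDistance) {
    for (std::size_t i = 0; i < switchDistances.size(); ++i) {
        if (cameraDistance <= switchDistances[i]) return i;
    }
    return switchDistances.size() - 1;  // clamp to coarsest level
}
```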
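The sort-then-group idea in point 3 can be sketched as a pass over the render queue: after sorting by render state and then by geometry chunk, identical pairs sit in runs, and each run becomes one instanced draw. The struct and field names here are hypothetical, and per-instance data (transforms etc.) is omitted:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical queue entry: one requested draw of one object.
struct DrawItem { int stateId; int geometryId; };

// One instanced draw covering a run of identical items.
struct InstancedBatch { int stateId; int geometryId; int instanceCount; };

std::vector<InstancedBatch> buildBatches(std::vector<DrawItem> queue) {
    // Sort first by render state, then by geometry chunk, so that
    // instances of the same object end up adjacent.
    std::sort(queue.begin(), queue.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  if (a.stateId != b.stateId) return a.stateId < b.stateId;
                  return a.geometryId < b.geometryId;
              });
    std::vector<InstancedBatch> batches;
    for (const DrawItem& d : queue) {
        if (!batches.empty() && batches.back().stateId == d.stateId &&
            batches.back().geometryId == d.geometryId) {
            ++batches.back().instanceCount;  // same teapot: extend the batch
        } else {
            batches.push_back({d.stateId, d.geometryId, 1});
        }
    }
    return batches;
}
```

Each resulting batch would then feed one instanced draw call with `instanceCount` instances, so no explicit "this model repeats" flag is needed up front.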
Thanks for taking the time to think about my questions. I hope they inspire you to ask some questions of your own!
