OpenGL allowing for instancing and LOD

psykr
Please read this brief description of my engine design so far, and help me figure out how to (elegantly) add LOD and instancing! I have just started learning about shaders (pixel and vertex shaders, just the general concepts) and how to fit them into my engine, so please correct me if I get anything horrendously wrong. I intend to support the fixed-function pipeline as much as possible, which may influence some of my design decisions. Also, I am working with C++ / Direct3D 9 (not for Vista), but I hope to keep my code clean enough to port it to Linux / OpenGL in the near future. That is, I would prefer API-independent ways of doing things, but where I need the API I will use it.

Engine Design

The basic unit in my engine is a geometry chunk. It supplies information about the vertex layout (like a Direct3D vertex declaration) as well as the actual vertex data. It is designed to be as general as possible, from loading raw data directly from a file to procedurally generating vertex data when it needs to be uploaded to video memory. During each frame, I traverse the scene graph and try to cache the objects that will probably be rendered. Then I perform texture/shader/state sorting, followed by a second pass through all the geometry to actually render it. If data was not cached in the initial traversal, it has to be uploaded during the second pass before the geometry is rendered.
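For concreteness, here is roughly the kind of interface I picture for a geometry chunk. This is only a sketch to show the idea; the names and the exact layout description are placeholders, not settled code.

    // Sketch of the geometry chunk idea (C++); all names are placeholders.
    #include <cstddef>
    #include <vector>

    // One element of the vertex layout, roughly like a Direct3D vertex
    // declaration element: which stream it lives in, its byte offset,
    // its type and its usage.
    struct VertexElement {
        unsigned short stream;
        unsigned short offset;
        unsigned char  type;   // float1/float2/float3/color, etc.
        unsigned char  usage;  // position, normal, texcoord, etc.
    };

    class GeometryChunk {
    public:
        virtual ~GeometryChunk() {}

        // The layout the fixed-function pipeline or vertex shader expects.
        virtual const std::vector<VertexElement>& GetLayout() const = 0;

        // Number of bytes of vertex data this chunk wants in video memory.
        virtual std::size_t GetDataSize() const = 0;

        // Fill 'dest' (a locked vertex buffer region) with the vertex data.
        // A file-backed chunk copies from system memory; a procedural chunk
        // generates its vertices here, at upload time.
        virtual void FillVertexData(void* dest) const = 0;
    };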
  1. A video memory manager. I am thinking of managing video memory as arbitrary data: let a geometry chunk upload whatever data it wants, because it is responsible for making sure the vertex shader knows how to interpret it. It seems like Direct3D doesn't care what is in the buffers as long as you know how to interpret the data in your shaders. But video memory is very different from anything I've ever written a memory manager for, because:
    • It's okay to overwrite some memory, because the memory is backed up (or at least easily accessible) in system memory. (I wish I knew something about caches.)
    • It has to be very fast, or at least understand video memory access semantics, because accesses are much slower than system memory: there is the lock/unlock you have to do before your program can touch the memory, and slow accesses stall the CPU-GPU parallelism that is so important to framerate.
    • The amount of free memory can't (and shouldn't) be accurately determined. You can't predict when a lost device will invalidate all of the memory, or when some memory will magically free itself up for your use.
    • Also, on older cards (older shader versions?) you can't render starting at an arbitrary offset into a vertex buffer. I'm not exactly sure how this will affect my manager.
    I have not yet tried my hand at writing such a memory manager, but I would like to hear what people think. Are there points about video memory that I'm missing? Is there a much better way to deal with video memory? (A rough sketch of one possible starting point follows this list.)
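    For illustration, a minimal (untested) sketch of one possible starting point: a single dynamic vertex buffer treated as a ring, overwriting old contents using the usual DISCARD/NOOVERWRITE lock flags. It ignores per-chunk caching and allocations larger than the buffer, and all the names are mine.

        // Sketch only: suballocates a dynamic D3D9 vertex buffer as a ring.
        #include <d3d9.h>
        #include <cassert>
        #include <cstring>

        class VideoMemoryRing {
        public:
            VideoMemoryRing() : m_vb(NULL), m_size(0), m_cursor(0) {}

            HRESULT Create(IDirect3DDevice9* device, UINT sizeInBytes) {
                m_size = sizeInBytes;
                m_cursor = 0;
                // D3DPOOL_DEFAULT memory goes away on a lost device, so
                // OnLostDevice() must release it and Create() must run again
                // after the device is reset.
                return device->CreateVertexBuffer(
                    sizeInBytes, D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                    0 /* no FVF, vertex declarations are used */,
                    D3DPOOL_DEFAULT, &m_vb, NULL);
            }

            // Copies 'bytes' of chunk data into the ring and returns the byte
            // offset it landed at (for SetStreamSource later).
            UINT Upload(const void* data, UINT bytes) {
                assert(bytes <= m_size);
                DWORD flags = D3DLOCK_NOOVERWRITE;      // don't stall the GPU
                if (m_cursor + bytes > m_size) {        // wrapped around:
                    m_cursor = 0;                       // orphan old contents
                    flags = D3DLOCK_DISCARD;
                }
                void* dest = NULL;
                if (FAILED(m_vb->Lock(m_cursor, bytes, &dest, flags)))
                    return UINT(-1);
                memcpy(dest, data, bytes);
                m_vb->Unlock();
                UINT offset = m_cursor;
                m_cursor += bytes;
                return offset;
            }

            void OnLostDevice() {   // video memory contents are gone
                if (m_vb) { m_vb->Release(); m_vb = NULL; }
            }

            IDirect3DVertexBuffer9* Buffer() const { return m_vb; }

        private:
            IDirect3DVertexBuffer9* m_vb;
            UINT m_size;
            UINT m_cursor;
        };

    A real manager would presumably also remember which chunk owns which region, so cached data can be reused across frames instead of being re-uploaded every time.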
  2. Level of Detail. For pre-computed LOD meshes, you can give the geometry chunk some information about the level of detail needed, and it should return the appropriate mesh data (a toy sketch of what I mean is below). What I have questions about is LOD at runtime: are dynamic LOD schemes feasible with vertex shaders? I don't know enough about shaders or about current LOD algorithms to judge whether my current system works with them.
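    To make the pre-computed case concrete, this is roughly how I picture the detail level being chosen outside the chunk and handed to it. The thresholds, the doubling rule and the GetLod() accessor are all made up for illustration.

        #include <cstddef>

        // Pick an LOD index from camera distance: level 0 is the full mesh,
        // higher levels are progressively coarser pre-computed versions.
        inline std::size_t SelectLodLevel(float distanceToCamera,
                                          float firstSwitchDistance,
                                          std::size_t levelCount) {
            std::size_t level = 0;
            float threshold = firstSwitchDistance;
            while (level + 1 < levelCount && distanceToCamera > threshold) {
                ++level;
                threshold *= 2.0f;  // double the switch distance per level
            }
            return level;
        }

        // During the render traversal (GetLodCount/GetLod are hypothetical):
        //   std::size_t lod = SelectLodLevel(dist, 20.0f, chunk->GetLodCount());
        //   const GeometryChunk* mesh = chunk->GetLod(lod);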
  3. Instancing. Say I had a certain complex teapot model in my teapot warehouse scene. When rendering, I want my engine to be able to identify that all the teapots are the same (well, teapots with identical materials and geometry chunks, anyway) and somehow group them together for instancing. I think that after some particular sorting (first by render states, then by geometry chunks), multiple instances of an object will appear together in the render queue. They can be identified there, and when they are to be rendered the engine can take their per-instance data and perform stream instancing (at least). Is this the wrong way to use instancing? Should I have to explicitly mark a model as being rendered several times in a scene for my engine to use instancing?
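    For reference, this is how I understand the batch would actually be submitted with D3D9 stream-frequency instancing once the queue has grouped the identical chunks. It is only a sketch: buffer creation, the combined vertex declaration and the per-instance data upload are assumed to happen elsewhere, and the names are mine.

        #include <d3d9.h>

        // Draws 'instanceCount' copies of one indexed mesh, with per-instance
        // data (e.g. a world matrix) coming from a second vertex stream.
        void DrawInstancedBatch(IDirect3DDevice9* device,
                                IDirect3DVertexBuffer9* meshVB,     UINT meshStride,
                                IDirect3DVertexBuffer9* instanceVB, UINT instanceStride,
                                IDirect3DIndexBuffer9*  meshIB,
                                UINT vertexCount, UINT triCount, UINT instanceCount)
        {
            // Stream 0: the shared mesh, repeated once per instance.
            device->SetStreamSource(0, meshVB, 0, meshStride);
            device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | instanceCount);

            // Stream 1: per-instance data, advanced once per instance.
            device->SetStreamSource(1, instanceVB, 0, instanceStride);
            device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);

            device->SetIndices(meshIB);
            device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                         vertexCount, 0, triCount);

            // Restore the default, non-instanced stream frequencies.
            device->SetStreamSourceFreq(0, 1);
            device->SetStreamSourceFreq(1, 1);
        }

    As far as I know this path needs vs_3_0-capable hardware, so the engine would have to fall back to drawing the chunks one at a time (or to shader-constant instancing) on older cards.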
Thanks for taking the time to think about my questions; I hope they inspire you to ask some questions of your own.
