kamimail

OpenGL Questions about large-scale data rendering


Hi, I am a novice at real-time rendering, and I have a question about large-scale data rendering.

How do I handle the situation where the rendering data (geometry + textures) for a single frame is larger than the GPU's memory? If I partition the entire data set into a series of smaller packages and commit the packages to the pipeline one by one, reusing a single buffer on the GPU, the problem might be solved. However, this approach leads to another problem: how can I guarantee that one package in the buffer has already been rendered (has passed through the pipeline), so that the current contents of the buffer can be discarded and replaced with the next package? Is there an OpenGL command for this purpose, or is there another solution to my problem?

Thanks!

This isn't something you generally have to worry about. The OpenGL driver manages GPU memory; it will swap things in and out of system memory as needed. If you create more geometry/textures than will fit in your GPU's memory, then OpenGL just keeps the data in system memory until it is needed.
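A minimal sketch of that idea (texture count and size are hypothetical, and a loader such as GLEW is assumed to have set up the GL function pointers): you just create everything up front and let the driver decide what is resident on the GPU at any moment.
[code]
/* Allocate more texture data than the GPU can hold; the driver keeps
 * backing copies in system memory and pages textures onto the GPU as
 * draw calls reference them. */
#include <GL/glew.h>

#define NUM_TEXTURES 64    /* hypothetical count              */
#define TEX_SIZE     4096  /* 4096x4096 RGBA8 = 64 MB apiece  */

static GLuint textures[NUM_TEXTURES];

/* 'pixels' points to TEX_SIZE * TEX_SIZE * 4 bytes of image data. */
void create_textures(const unsigned char *pixels)
{
    glGenTextures(NUM_TEXTURES, textures);
    for (int i = 0; i < NUM_TEXTURES; ++i) {
        glBindTexture(GL_TEXTURE_2D, textures[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEX_SIZE, TEX_SIZE, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        /* No manual residency management: the driver moves this texture
         * between system memory and GPU memory as needed. */
    }
}
[/code]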

[quote]How can I guarantee that one package in the buffer has already been rendered (has passed through the pipeline), so that the current contents of the buffer can be discarded and replaced with the next package?[/quote]
If you issue commands that (1) create a resource, (2) use that resource and (3) destroy the resource, then the GPU driver is smart enough to handle it.
When you issue command #(3), if command #(2) hasn't been processed by the GPU yet, then the driver will stall and wait for #(2) to complete before it actually executes #(3).
Nothing to worry about.
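For instance, a sketch of that create -> use -> destroy sequence (the buffer layout and usage hints are my own assumptions for the example, and a GLEW-loaded context is assumed):
[code]
#include <GL/glew.h>

/* 'vertices' holds vertex_count packed xyz positions. */
void draw_packet(const float *vertices, GLsizei vertex_count)
{
    GLuint vbo;

    /* (1) create the resource */
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertex_count * 3 * sizeof(float),
                 vertices, GL_STREAM_DRAW);

    /* (2) use the resource */
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);

    /* (3) destroy the resource - legal even though the draw may still be
     * in flight; the driver defers the actual release until the GPU has
     * finished with the buffer. */
    glDeleteBuffers(1, &vbo);
}
[/code]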
[quote]How do I handle the situation where the rendering data (geometry + textures) for a single frame is larger than the GPU's memory? I can partition the entire data set into a series of smaller packages and commit the packages to the pipeline one by one...[/quote]
However, this is going to be a very slow approach - okay for offline rendering, but not very good for real-time rendering.
Instead, you should compress your data / discard detail until it does fit into GPU memory. For example, terrain LOD algorithms save you from having to store a fully-detailed terrain mesh in memory, and Sparse Virtual Textures (MegaTextures) allow you to render from arbitrarily sized textures (e.g. 65k*65k pixels) with only a few MB of GPU RAM via paging.
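As a toy illustration of the LOD idea (the chunk layout and distance thresholds are made up for the example, not taken from any particular algorithm): each terrain chunk keeps a few meshes of decreasing density, and only the level picked for the current camera distance needs to be on the GPU.
[code]
#include <math.h>

#define NUM_LODS 4

typedef struct {
    float    center_x, center_z;  /* chunk center in world space        */
    unsigned vbo[NUM_LODS];       /* one vertex buffer per detail level */
} TerrainChunk;

/* Nearer chunks get the denser mesh, distant chunks a coarse one, so the
 * total vertex data needed per frame stays bounded. */
int select_lod(const TerrainChunk *c, float cam_x, float cam_z)
{
    float dx   = c->center_x - cam_x;
    float dz   = c->center_z - cam_z;
    float dist = sqrtf(dx * dx + dz * dz);

    if (dist < 100.0f)  return 0;  /* full detail   */
    if (dist < 300.0f)  return 1;
    if (dist < 1000.0f) return 2;
    return NUM_LODS - 1;           /* coarsest mesh */
}
[/code]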

karwost and Hodgmen, thanks!

I still have a question:
If I use a VBO to hold the vertices of a mesh, can I safely modify a sub-portion of the VBO right after calling one of the glDraw* functions?
According to some pages on the web it is legitimate to do that, but in my project it sometimes fails to produce the correct frame.
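For reference, here is the pattern I mean (a bare sketch; the buffer name, sizes, and attribute setup are placeholders, not my actual code):
[code]
#include <GL/glew.h>

/* Draw from a VBO, then immediately overwrite part of it. The driver is
 * expected to make this look sequential (either copy the data or wait for
 * the draw to finish) - that is the behaviour I am relying on. */
void draw_then_update(GLuint vbo, GLsizei vertex_count,
                      const float *new_data, GLsizeiptr bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);

    /* Update a sub-range right after the draw call. */
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, new_data);
}
[/code]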
