Questions about large-scale data rendering

Hi, I am a novice in real-time rendering, and I have a question about large-scale data rendering.

How should I handle the situation where the rendering data (geometry + textures) for a single frame is larger than the available GPU memory? If I partition the whole data set into a series of smaller packages and commit them to the pipeline one by one through a single fixed-size GPU buffer, the problem might be solved. However, this approach leads to another problem: how can I guarantee that the package currently in the buffer has finished rendering (has passed through the pipeline), so that the buffer's contents can safely be discarded and replaced with the next package? Is there an OpenGL command for this purpose, or is there some other solution to this problem?
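To make the idea concrete, here is a minimal sketch of such a chunked loop. It assumes OpenGL 3.2+ sync objects (glFenceSync / glClientWaitSync), which address exactly this "wait until the buffer has been consumed" question; the names kChunkBytes, numChunks, chunkData, and chunkVertexCount are placeholders for illustration, not part of any real API, and vertex attribute setup is omitted:

```cpp
// Sketch (C++, OpenGL 3.2+): stream geometry through one fixed-size GPU
// buffer, chunk by chunk, fencing each chunk so the buffer is only
// overwritten after the GPU has finished rendering it.
#include <GL/glew.h>   // or any loader exposing GL 3.2+
#include <cstddef>

static void waitForFence(GLsync fence) {
    // Block the CPU until the fence signals, i.e. until every GL command
    // issued before glFenceSync has completed.
    while (glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                            1000000 /* 1 ms per attempt */) == GL_TIMEOUT_EXPIRED) {
        // keep waiting
    }
}

void renderInChunks(GLuint vbo, std::size_t numChunks,
                    const void* const* chunkData,       // placeholder source data
                    const GLsizeiptr*  chunkBytes,      // placeholder sizes
                    const GLsizei*     chunkVertexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    GLsync fence = 0;

    for (std::size_t i = 0; i < numChunks; ++i) {
        if (fence) {
            waitForFence(fence);   // previous package has passed the pipeline
            glDeleteSync(fence);   // now safe to overwrite the buffer
        }
        // Refill the single GPU buffer with the next package of data.
        glBufferSubData(GL_ARRAY_BUFFER, 0, chunkBytes[i], chunkData[i]);

        // Issue the draw call(s) that reference this package.
        glDrawArrays(GL_TRIANGLES, 0, chunkVertexCount[i]);

        // Fence signals once all commands issued above have completed.
        fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    }
    if (fence) {
        waitForFence(fence);
        glDeleteSync(fence);
    }
}
```

Note that waiting on the fence for a single shared buffer stalls the CPU every iteration; in practice one would likely ping-pong between two (or more) buffers so uploading chunk i+1 overlaps with rendering chunk i. glFinish also works as a blunt "wait for everything" alternative, at the cost of draining the whole pipeline.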

Thanks!
