
Questions about large-scale data rendering


Hi, I am a novice in real-time rendering, and I have a question about large-scale data rendering.

How should I handle the situation where the rendering data (geometry + textures) for a single frame is larger than the GPU's memory? If I partition the data into a series of smaller packages and commit the packages to the pipeline one by one through a fixed GPU buffer, the problem might be solved. However, this approach leads to another problem: how can I guarantee that the package currently in the buffer has finished rendering (has passed through the pipeline), so that the buffer's contents can safely be discarded and replaced with the next package? Is there an OpenGL command available for this purpose, or is there some other solution to this problem?

