Accessing info in a vertex buffer object from outside the video memory


6 replies to this topic

#1 Lil_Lloyd   Members   -  Reputation: 287


Posted 17 January 2013 - 08:39 PM

Hello again true believers. 

 

I have a possibly very simple question. When I generate vertices for a heightmap, for example, then copy them to video memory using glBufferData(), I usually delete the original copy of the vertex data afterwards, e.g.:

 

// pseudocode! (GenerateHeightmapVertices is just a placeholder)
GLfloat* vertices = GenerateHeightmapVertices(vertexCount);  // generate vertex data
glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat),
             vertices, GL_STATIC_DRAW);                      // copy to video memory
delete[] vertices;                                           // host copy no longer needed

 

However, is it possible to access specific locations in the video-memory buffer from the host/CPU side, e.g. to read back a single vertex? Or will I need to keep the vertex data around in host memory? Of course this raises performance questions: will reading from video memory incur a long delay, or is it roughly comparable to a normal CPU cache miss anyway?

 

This question extends to accessing normal data, texture coordinate data, or anything else you see fit to store in video memory.




#2 RobTheBloke   Crossbones+   -  Reputation: 2340


Posted 17 January 2013 - 09:10 PM

glMapBuffer.

 

 

For lots of random access, it's going to be much slower than a local copy. For occasional access of specific chunks (rather than individual verts), it *may* be acceptable, but that depends on your needs.
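For illustration, a minimal sketch of reading one vertex back this way (vbo and vertexIndex are placeholder names, and the buffer is assumed to hold tightly packed XYZ floats):

glBindBuffer(GL_ARRAY_BUFFER, vbo);
const GLfloat* data = (const GLfloat*)glMapBuffer(GL_ARRAY_BUFFER, GL_READ_ONLY);
if (data)
{
    // reading here can stall until the GPU is finished with the buffer
    GLfloat x = data[vertexIndex * 3 + 0];
    GLfloat y = data[vertexIndex * 3 + 1];
    GLfloat z = data[vertexIndex * 3 + 2];
    glUnmapBuffer(GL_ARRAY_BUFFER);
}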



#3 irreversible   Crossbones+   -  Reputation: 1316


Posted 17 January 2013 - 11:02 PM

You can speed things up considerably by using asynchronous access if you don't need to get things done in the same frame.
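One way to do that (a sketch using GL 3.2+ sync objects; vbo is a placeholder) is to insert a fence after the last draw call that uses the buffer, then only map it once the fence has signalled:

// after submitting the draw calls that use the buffer:
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// ...a frame or so later, poll without blocking (timeout of zero):
GLenum status = glClientWaitSync(fence, 0, 0);
if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED)
{
    glDeleteSync(fence);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    void* data = glMapBuffer(GL_ARRAY_BUFFER, GL_READ_ONLY); // should no longer stall
    // ...read the data...
    glUnmapBuffer(GL_ARRAY_BUFFER);
}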



#4 Lil_Lloyd   Members   -  Reputation: 287


Posted 18 January 2013 - 02:51 AM

Good starting points for some research, thanks both of you! 



#5 larspensjo   Members   -  Reputation: 1526


Posted 18 January 2013 - 07:49 AM

The fact that you need access to this data implies that you will use it for something. Have you already considered using a shader to do the computation?

 

Shaders don't have to produce only graphical results.


Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#6 Lil_Lloyd   Members   -  Reputation: 287


Posted 19 January 2013 - 10:03 AM

That's a good point larspensjo. Coincidentally, this evening I started rendering terrain from a heightmap using a texture lookup in a shader, as opposed to writing the y coordinate values into a vertex buffer. Thus I can use the same heightmap to look up where an object should be placed.
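A minimal GLSL sketch of that idea (heightMap, heightScale, and mvp are assumed uniforms; the mesh is a flat grid supplying x/z positions and matching texture coordinates):

#version 330
uniform sampler2D heightMap;
uniform float heightScale;
uniform mat4 mvp;
in vec2 gridPos;   // x/z of the flat grid vertex
in vec2 texCoord;  // matching heightmap coordinate
void main()
{
    // fetch the height and use it as the y coordinate
    float h = texture(heightMap, texCoord).r * heightScale;
    gl_Position = mvp * vec4(gridPos.x, h, gridPos.y, 1.0);
}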



#7 mhagain   Crossbones+   -  Reputation: 7866


Posted 21 January 2013 - 07:51 AM

If you read back from a buffer object, what is going to kill you is not bandwidth, nor the volume of data read back, but CPU/GPU synchronization. Because the CPU and GPU operate asynchronously, and because there may still be pending draw calls using that buffer object, all pending operations must be flushed and completed before the readback can return. This can take up to three or so frames' worth of time, and that is true whether you read back one byte or 100 MB.

 

IMO any requirement to read back from a GPU resource is more indicative of a design flaw than anything else. The whole point of GPU resources is to keep data that the GPU uses frequently in memory local to the GPU, so building your design around that behaviour is essential (there are of course exceptions to this rule, such as screenshots).


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.




