Arcsynthesis OpenGL tutorials contradict OpenGL documentation?

Started by
1 comment, last by NathanRidley 9 years, 2 months ago

From the OpenGL documentation here: https://www.opengl.org/sdk/docs/man3/xhtml/glBufferData.xml

STREAM

The data store contents will be modified once and used at most a few times.

From the Arcsynthesis tutorials here: http://www.arcsynthesis.org/gltut/positioning/Tutorial%2003.html

GL_STREAM_DRAW tells OpenGL that you intend to set this data constantly, generally once per frame.

As I understand it, the latter is held in high regard. Why then the contradiction?


Well... still thinking about it.

Ok, I think that both are right, but the descriptions are somewhat unclear/misleading. The manual is more about the content of the buffer, whereas the tutorial talks more about touching the buffer. When you stream data, you often load a chunk into the buffer, use it a few times to display it, then refill the buffer with new data (e.g. the next image in a video stream). The content, as a whole data block, is modified (loaded) once, then used a few times, but the buffer itself is touched every frame.

Compare it to DYNAMIC usage, e.g. a particle system, where only parts of the data (a few particles) are modified every frame.
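To make the contrast concrete, here is a minimal sketch of the DYNAMIC pattern: allocate the store once, then patch only the changed region each frame with glBufferSubData. It assumes a current GL context; `Particle`, `MAX_PARTICLES`, `particles`, `first_dirty`, and `dirty_count` are placeholder names, not anything from the tutorial.

```c
/* DYNAMIC usage sketch: one allocation, frequent partial updates.
 * Assumes a current GL context and application-defined particle data. */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

/* Allocate the full store once, hinting frequent partial modification. */
glBufferData(GL_ARRAY_BUFFER, MAX_PARTICLES * sizeof(Particle),
             NULL, GL_DYNAMIC_DRAW);

/* Per frame: overwrite only the particles that actually changed. */
glBufferSubData(GL_ARRAY_BUFFER,
                first_dirty * sizeof(Particle),   /* byte offset */
                dirty_count * sizeof(Particle),   /* byte size   */
                &particles[first_dirty]);         /* source data */
```

The point is that the buffer object (and whatever memory the driver chose for it) persists across frames; only a subrange of its contents is rewritten.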

I don't think they strictly contradict each other. The wording of the buffer usage flags is admittedly somewhat hard to understand and leaves some ambiguity. That's of course intentional, too, since the usage flags are very general hints, not mandates (the implementation may totally ignore what you tell it).

STREAM usage suggests that you upload data (once), then use it once or maybe twice (e.g. for drawing something) and then don't need it any more. On the next occasion, very soon (usually the next frame), you will do the same thing, but with new data. Think of presenting a video. Once displayed, the old frame isn't very interesting any more, you likely want to display a different one next time.
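The STREAM pattern described above might look roughly like this per frame. This is only a sketch assuming a current GL context; `frame_data`, `frame_size`, and `vertex_count` are placeholders for this frame's vertex data. The extra glBufferData call with NULL "orphans" the previous store, a common streaming idiom so the driver need not stall on in-flight draws.

```c
/* STREAM usage sketch: re-specify the whole store every frame,
 * draw once, then replace it next frame. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, frame_size, NULL, GL_STREAM_DRAW);       /* orphan old store */
glBufferData(GL_ARRAY_BUFFER, frame_size, frame_data, GL_STREAM_DRAW); /* upload new data  */
glDrawArrays(GL_TRIANGLES, 0, vertex_count);  /* used once (or a few times), then replaced */
```

Here the data is "modified once and used at most a few times" (the manual's wording) while the buffer is re-set "generally once per frame" (the tutorial's wording) - both describe the same pattern.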

That is, in other words, more or less exactly what Arcsynthesis says ("set data [...] generally once per frame"), and it is in some way what the official docs say, too ("modified once and used at most a few times").

Why would OpenGL want to know anyway? The implementation/driver might choose not to allocate a dedicated block of GPU memory for your data, but instead DMA it in right when it's being used. Since you're only going to use that data once (or maybe twice), that will probably do; dedicated GPU memory is better reserved for data that is accessed many thousands of times.

Thanks, that makes things clearer!

This topic is closed to new replies.
