Speed of VBOs vs. VAR


VBOs are faster in 99% of all cases, unless you do something wrong.

The reason is simply that VBOs are stored in graphics memory, so the card doesn't have to pull the data from system RAM every frame; this especially helps with static geometry.
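
For readers who haven't touched the extension yet, here is a minimal sketch of putting static geometry into a VBO with the ARB entry points of the time. The vertex array, vertex count, and loaded entry points are assumptions for illustration:

    // Minimal static-geometry VBO sketch (ARB_vertex_buffer_object).
    // Assumes a GL context and loaded extension entry points;
    // 'vertices' and 'numVerts' are hypothetical application data.
    GLuint vbo;
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);

    // GL_STATIC_DRAW_ARB tells the driver the data won't change, so it
    // can be uploaded to video memory once instead of re-sent every frame.
    glBufferDataARB(GL_ARRAY_BUFFER_ARB,
                    numVerts * 3 * sizeof(GLfloat), vertices,
                    GL_STATIC_DRAW_ARB);

    // With a buffer bound, the pointer argument is a byte offset into it.
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*)0);
    glDrawArrays(GL_TRIANGLES, 0, numVerts);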

Ignore LousyPhreak.

NVidia's VBO support is a bit flaky at the moment: streaming from more than one VBO at a time kills performance, and you can't allocate certain sizes of memory (what's worse is that the problem sizes vary).

VBO is, however, much easier to use. Personally I'd go for VBO, just because it's portable (to ATI cards as well) and it's the future standard.

You have to remember that you're unique, just like everybody else.

Ignore Python Regious, or whatever his name is.

He's talking absolute b***s**t.

Even a badly written VBO will, in most cases, outperform normal streaming.

I'd like to see the proof of his argument ...

His information on Nvidia's VBO implementation is right AFAIK; with at least one of the recent drivers, if you tried to allocate a buffer larger than 6 MB, IIRC, it would lose all performance. I believe it was YannL who raised the issue a little while back.
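
Not from the thread, but for illustration: the obvious workaround for a bug like this is to keep each buffer well under the problem size, splitting one big vertex array across several VBOs. A sketch, assuming <vector> and <algorithm> and the same hypothetical 'vertices'/'numVerts' data as above:

    // Hypothetical workaround sketch: split one large vertex array into
    // several VBOs, each safely under the problematic ~6 MB size.
    const size_t stride   = 3 * sizeof(GLfloat);          // bytes per vertex
    const size_t maxVerts = (4 * 1024 * 1024) / stride;   // ~4 MB per buffer

    std::vector<GLuint> vbos;
    for (size_t first = 0; first < numVerts; first += maxVerts) {
        size_t count = std::min(maxVerts, numVerts - first);
        GLuint vbo;
        glGenBuffersARB(1, &vbo);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
        glBufferDataARB(GL_ARRAY_BUFFER_ARB, count * stride,
                        vertices + first * 3, GL_STATIC_DRAW_ARB);
        vbos.push_back(vbo);
    }
    // At draw time, bind each chunk in turn and draw its vertex count.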

quote:
Original post by Shag
Ignore Python Regious, or whatever his name is.

He's talking absolute b***s**t.

Even a badly written VBO will, in most cases, outperform normal streaming.

I'd like to see the proof of his argument ...

Seems to me you're the one making unsubstantiated claims, so I'd say the onus is on you to do the proving.

BTW, I suggest you tone down the flaming a bit. It's bad enough when it's in the lounge, but keep it out of the tech forums.

quote:

Even a badly written VBO will, in most cases, outperform normal streaming.

That's probably true (depending on what "normal streaming" actually means).

However, the original poster was asking about VBO versus the (NVIDIA-specific) VAR extension, which allows you to allocate VRAM, AGP, or system RAM, lock the memory for DMA beforehand, and, coupled with the fence extension, gives you very low-cost synchronization (if not taken overboard). Amazing what you can find out if you actually read the initial question, isn't it? :-)

The answer is that VAR gives you the tools to implement something that's morally equivalent to VBO. If you have the skills to build a high-quality buffer management implementation on top of VAR, it is likely to perform on par with VBO on NVIDIA hardware. ATI doesn't have VAR, and the ATI-specific extension isn't similar to VAR (it's more similar to VBO), so I'd suggest going with VBO if it's an either-or.
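
For concreteness, a rough sketch of the VAR-plus-fence pattern described above, from memory of the NV_vertex_array_range and NV_fence specs. Extension loading via wglGetProcAddress is omitted, and 'fillVertices' is a hypothetical app-side copy routine:

    // NVIDIA-only VAR + fence sketch; treat it as an outline, not a
    // drop-in implementation.
    GLsizei bytes = numVerts * 3 * sizeof(GLfloat);

    // readFreq 0, writeFreq 0, priority 0.5 requests AGP memory;
    // a priority near 1.0 asks for video memory instead.
    GLfloat* mem = (GLfloat*)wglAllocateMemoryNV(bytes, 0.0f, 0.0f, 0.5f);

    // Hand the locked range to the driver so vertex fetches can DMA from it.
    glVertexArrayRangeNV(bytes, mem);
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);
    glEnableClientState(GL_VERTEX_ARRAY);

    GLuint fence;
    glGenFencesNV(1, &fence);

    // Per frame: write vertices into the range, draw, drop a fence...
    fillVertices(mem);
    glVertexPointer(3, GL_FLOAT, 0, mem);
    glDrawArrays(GL_TRIANGLES, 0, numVerts);
    glSetFenceNV(fence, GL_ALL_COMPLETED_NV);

    // ...and wait on the fence before overwriting the same region, so the
    // CPU never scribbles over data the GPU is still reading.
    glFinishFenceNV(fence);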

Regarding vertices in VRAM being faster than vertices in AGP: that's only true if you have VRAM bandwidth to spare. If you have VRAM bandwidth to spare, you're not fill bound, and if you're not fill bound, something's wrong with you :-) Meanwhile, if you use all of the VRAM bandwidth for framebuffer fill, you get an extra gigabyte per second or so of AGP bandwidth to use in parallel for vertex streaming. That's a pretty good argument in favor of AGP memory rather than VRAM, I think.
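
With VBO you don't choose AGP versus VRAM directly; the usage hint is the closest handle you get. A sketch of a per-frame streamed buffer, under the assumption that the driver maps GL_STREAM_DRAW_ARB to AGP memory ('streamVbo', 'bytes', 'numVerts' and 'fillVertices' are placeholders):

    // Streamed-geometry VBO sketch: respecify with a NULL pointer, then
    // map and fill. GL_STREAM_DRAW_ARB hints "written once, drawn once",
    // which drivers typically satisfy with AGP rather than VRAM.
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, streamVbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, NULL, GL_STREAM_DRAW_ARB);

    void* dst = glMapBufferARB(GL_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB);
    fillVertices((GLfloat*)dst);           // hypothetical app-side copy
    glUnmapBufferARB(GL_ARRAY_BUFFER_ARB);

    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*)0);
    glDrawArrays(GL_TRIANGLES, 0, numVerts);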

quote:
Original post by _the_phantom_
His information on Nvidia's VBO implementation is right AFAIK; with at least one of the recent drivers, if you tried to allocate a buffer larger than 6 MB, IIRC, it would lose all performance.

I have also experienced this loss of performance. Very frustrating.

quote:
Original post by AxoDosS
I have also experienced this loss of performance. Very frustrating.

At least so far it was always extreme enough to let you know something is awfully wrong. Though I preferred the bug where allocating LESS than a few MB would kill performance. I wonder if NVIDIA will someday care to fix their VBO support, or if they are just %%§& because the ARB preferred ATI's version (just like with certain DX9 features resulting at first in lousy performance on NVIDIA's cards).

quote:
Original post by Shag
Ignore Python Regious, or whatever his name is.

He's talking absolute b***s**t.

Even a badly written VBO will, in most cases, outperform normal streaming.

I'd like to see the proof of his argument ...

Ok: a thread about VBO performance and bugs.

And you're right, VBO will in most cases (if it's allocated in AGP or VRAM) outperform a standard vertex array implementation. However, that's not what the original poster asked.

You have to remember that you're unique, just like everybody else.

quote:
Original post by Trienco
quote:
Original post by AxoDosS
I have also experienced this loss of performance. Very frustrating.

At least so far it was always extreme enough to let you know something is awfully wrong. Though I preferred the bug where allocating LESS than a few MB would kill performance. I wonder if NVIDIA will someday care to fix their VBO support, or if they are just %%§& because the ARB preferred ATI's version (just like with certain DX9 features resulting at first in lousy performance on NVIDIA's cards).

In an attempt to head off an ATI vs. NVIDIA war:

* Any bugs in NV's VBO implementation have nothing to do with them preferring one interface over another; if they picked one over the other, it would be for conceptual reasons. Remember, this extension is designed so that things like texture data could eventually be handled the same way, and a VAR-style system wouldn't make sense for that to work.

* The whole DX thing was NV's fault more than anything: they tried to push the 32/16-bit standard, but MS decided that 24-bit was enough, which is what ATI designed for. (I seem to recall something about NV not really taking part in the DX9 discussions as well.) So in effect they brought it upon themselves by not taking part.

(Before I get accused of "fan-boy"-isms, I should point out that until the 9700 Pro I got last year I'd only had NVIDIA cards from the TNT onwards, so I'm not blind to one side or the other.)

Hehe, I still have my good old GF3 and avoided ATI for a long time because they have been a little too infamous for their drivers, though by now quite a few things seem to have changed. Finally needing a card with support for fragment programs: how's your 9700 doing? Either that or a 9800 (non-Pro) seems to be a good trade-off between price and getting all the "DX9" features. Also (to get back on topic) I'd be curious to compare any VBO issues ;-)

The 9700 Pro has been great for me; I've had it about a year now. I haven't had any issues with the drivers myself, and their OpenGL support is much improved. All in all a very good card with much to recommend it.

As for the VBO issues, I don't know of any. That's not to say they don't exist, but as the poster said, VBO is close to ATI's own VAO extension, so I don't see why it would have any.
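
For comparison, a sketch of the ATI buffer-object path mentioned here (ATI_vertex_array_object), written from memory of the spec, so treat the exact signatures as an assumption. The point is that, like VBO, it is a buffer-object model rather than a raw memory range:

    // ATI VAO sketch: create a buffer object holding the vertex data...
    GLuint buf = glNewObjectBufferATI(numVerts * 3 * sizeof(GLfloat),
                                      vertices, GL_STATIC_ATI);

    // ...then bind the vertex array to an offset inside it, much like
    // glVertexPointer with a bound VBO.
    glEnableClientState(GL_VERTEX_ARRAY);
    glArrayObjectATI(GL_VERTEX_ARRAY, 3, GL_FLOAT, 0, buf, 0);
    glDrawArrays(GL_TRIANGLES, 0, numVerts);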

Trienco, I think I have emailed you before. I also have a 9700 Pro and love it; VBOs work great on it. That is why I asked the question about which is faster. I would recommend a 9700 Pro if you can still find one. The 9800 Pro really doesn't have that much more performance than the 9700 Pro for the money, though you do get 2.1 or 2.2 shaders.

quote:
Original post by MARS_999
2.1 or 2.2 shaders

Eh? No such thing, AFAIK... Using DirectX designations, there's PS 2.0, PS 2.0 Extended, and PS 3.0. The Radeon 9800 has no more capabilities than the 9700, except in general performance (due to a much higher clock speed, of course). The F-Buffer (when driver support for it eventually comes out) is the only other difference, as it strips away the instruction limits of the 9700.


Well, all I can find are 9700 Pros that cost as much as or more than a cheap 9800 (non-Pro). Speed seems pretty much the same, and those "2.1 shaders" are ATI's own numbering, so it might just hint at the F-Buffer. One way or another, I need fragment program support, and NVIDIA doesn't seem to have anything that appeals to my needs AND my wallet.

And the other thing: when I changed from VAR to VBO I didn't notice any difference, and I wouldn't expect any program using more than extremely primitive textures to show a real difference.

Mails are another problem. With about 60 spam mails per day and one real mail per week, I tend to miss the important mails and delete them with the rest... definitely time for another address.

Ah... that'd be "SmartShader 2.1" (a marketing name), certainly not Pixel Shader 2.1. You're probably right, though, in that the extra .1 is just the F-Buffer.
