



ATI and complete VBO weirdness


After playing around with it for a long time, I just found something that's even worse than the bad performance with anything but float:

float* p = (float*)glMapBufferARB(GL_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB);
for (int i = 0; i < PS*PS; ++i) {
    // ... write PS*PS floats through p ...
}
unsigned char* pq = (unsigned char*)(p + PS*PS);
for (int poi = 0; poi < PS*PS; ++poi) {
    unsigned char y = (float)Heightmap[(pz+z)*MapSize + px+x] / 512;
    // ... store y through pq ...
}
So far, one would expect to end up with a bunch of floats, followed by a bunch of bytes (and, after recent experiences, completely lousy speed). When rendering, it doesn't matter if I use

glVertexAttribPointerARB(1, 1, GL_FLOAT, false, 0, 0);

or

glVertexAttribPointerARB(1, 1, GL_UNSIGNED_BYTE, false, 4, 0);

Both work fine and fast, whereas

glVertexAttribPointerARB(1, 1, GL_UNSIGNED_BYTE, false, 0, 0);

produces chaos and is slow as hell. It looks as if the value was automatically converted to float when storing it, and the pointer incremented by 4 instead of 1; when rendering, the type seems to be ignored and float used regardless. Doing the same with int instead of char gives the same result, except that using GL_INT as the type won't draw (or rather, draws nonsense) and is slow, so it seems it was again stored as float.

Has anybody noticed something like this with their Radeon and newer drivers? It was strange enough that I was forced to use floats for colors etc., but now I wonder if my understanding of C is completely flawed (and the pointer casting doesn't work), or if ATI is doing something really strange (they actually DO have quality control in their driver department, don't they?).

Edit: OK, so unmapping, mapping again as unsigned char* and jumping to the right position will at least store the bytes as bytes. That is, when rendering, GL_FLOAT now screws up while GL_UNSIGNED_BYTE looks right. Except it's back to snail mode. I really can't believe it should be impossible to use types other than float with VBOs, especially since I never noticed any problems with this on my old GF3, and there are no problems at all if the data comes from a plain vertex array. So even if the conversion caused a speed hit, it obviously can't be this big (we're talking about a factor of pretty much 100).

In other words: is anyone using VBOs on a Radeon (a 9800 would be perfect), storing something besides float (for example unsigned bytes for color), and not getting slapped for it by driver/card/god?

[edited by - Trienco on February 6, 2004 2:38:49 PM]


