
Bad Monkey

Please, make the bad octree stop!


Hey all, I have a quick query for anyone out there who has implemented an octree: is there an ideal number of polygons per node at which to stop dividing? I have been tinkering with my engine of late, and have been less than impressed with the performance (compared to just rendering everything). I currently divide until there are fewer than 200 polys in a node... is this too many or too few? I'm targeting fairly decent video cards (GeForce class and above), but I don't want it to totally dog out on slower cards either. I get the feeling that my trees are too deep (i.e. too few polys per node), and that this causes too much overhead during frustum culling... whadya reckon? Mind you, it could be that my poor old Celeron 300A just can't cut it anymore (you can almost hear it squeal on the days I overclock it to 464...)
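For reference, the build step looks roughly like this. This is a heavily simplified sketch, not my actual engine code: the types, the centroid-based triangle classification, and the MAX_DEPTH safety cap are all stand-ins, just to show where the 200-poly stop test sits.

```cpp
#include <vector>
#include <cstddef>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };
struct AABB { Vec3 lo, hi; };

const std::size_t MAX_POLYS_PER_NODE = 200; // the threshold in question
const int MAX_DEPTH = 8;                    // cap so dense clusters can't recurse forever

// Bounds of one of the eight octants of b.
static AABB childBounds(const AABB& b, int i)
{
    Vec3 c = { 0.5f * (b.lo.x + b.hi.x),
               0.5f * (b.lo.y + b.hi.y),
               0.5f * (b.lo.z + b.hi.z) };
    AABB r;
    r.lo.x = (i & 1) ? c.x : b.lo.x;  r.hi.x = (i & 1) ? b.hi.x : c.x;
    r.lo.y = (i & 2) ? c.y : b.lo.y;  r.hi.y = (i & 2) ? b.hi.y : c.y;
    r.lo.z = (i & 4) ? c.z : b.lo.z;  r.hi.z = (i & 4) ? b.hi.z : c.z;
    return r;
}

// Cheap classification: assign a triangle to a box by its centroid.
static bool centroidInBox(const Triangle& t, const AABB& b)
{
    Vec3 c = { (t.a.x + t.b.x + t.c.x) / 3.0f,
               (t.a.y + t.b.y + t.c.y) / 3.0f,
               (t.a.z + t.b.z + t.c.z) / 3.0f };
    return c.x >= b.lo.x && c.x <= b.hi.x &&
           c.y >= b.lo.y && c.y <= b.hi.y &&
           c.z >= b.lo.z && c.z <= b.hi.z;
}

struct OctreeNode {
    AABB bounds;
    std::vector<Triangle> polys;   // only filled in leaves
    OctreeNode* child[8];
    OctreeNode() { for (int i = 0; i < 8; ++i) child[i] = 0; }
};

void build(OctreeNode* node, const std::vector<Triangle>& polys, int depth)
{
    // Stop dividing: few enough polys, or the tree is already deep enough.
    if (polys.size() <= MAX_POLYS_PER_NODE || depth >= MAX_DEPTH) {
        node->polys = polys;       // leaf: keep the geometry here
        return;
    }
    for (int i = 0; i < 8; ++i) {
        AABB sub = childBounds(node->bounds, i);
        std::vector<Triangle> subset;
        for (std::size_t j = 0; j < polys.size(); ++j)
            if (centroidInBox(polys[j], sub))
                subset.push_back(polys[j]);
        if (!subset.empty()) {
            node->child[i] = new OctreeNode;
            node->child[i]->bounds = sub;
            build(node->child[i], subset, depth + 1);
        }
    }
}
```

The depth cap is there because a tight cluster of more than 200 polys that never separates into different octants would otherwise recurse forever.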

If you are running on a 300 MHz CPU with a GeForce card, the overhead is probably in the CPU time, so you are dividing too much! Try testing on a faster CPU with a similar graphics card... if the speed goes up dramatically, that proves my theory.

How much you should divide really depends on the CPU/video card combination. Beefier video cards can get away with fewer subdivisions, while faster CPUs can handle more, making everything even faster. It's all a matter of balancing it correctly on a given system. You may want some kind of variable splitting that takes the user's system into account - if that's even possible.
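Something like this, maybe: benchmark the CPU at startup and pick the leaf size from the result. This is a rough sketch only; pickLeafSize, cullTestDummy, and every cutoff number in it are pure guesswork to show the idea, not measured values.

```cpp
#include <cstddef>
#include <ctime>

// Stand-in for the cost of one AABB-vs-frustum plane test
// (a few multiplies and a compare).
static float cullTestDummy(int i)
{
    float x = float(i) * 0.001f;
    return (x * 0.3f + x * 0.5f - x * 0.2f > 1.0f) ? 1.0f : 0.0f;
}

// Time a burst of dummy cull tests and choose a leaf size from the rate:
// slow CPU -> bigger leaves -> shallower tree -> less per-frame tree walking.
std::size_t pickLeafSize()
{
    const int N = 100000;
    volatile float sink = 0.0f;            // keeps the loop from being optimized away
    std::clock_t t0 = std::clock();
    for (int i = 0; i < N; ++i)
        sink += cullTestDummy(i);
    double secs = double(std::clock() - t0) / CLOCKS_PER_SEC;
    if (secs <= 0.0) secs = 1e-6;          // clock too coarse: assume very fast
    double testsPerSec = N / secs;

    if (testsPerSec > 5.0e6) return 100;   // fast CPU: deep tree, tight culling
    if (testsPerSec > 1.0e6) return 300;
    return 800;                            // slow CPU: let the video card chew more
}
```

Whether break points from a synthetic loop map sensibly onto real culling work is exactly the thing that would need testing on different machines.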

Yeah... I was thinking along those lines... but it looks like I'm gonna have to do a hell of a lot of testing on different machines to get it right... *sigh* ...maybe there is some elusive formula I can scratch out, Terran.

Obviously, in my case (Celery 300A + GeForce2 MX) I am very much CPU bound, but I imagine it would spank along on any CPU built this millennium.

I shall do a crapload of testing today (the project is due tomorrow... eek), but please, anyone else, feel free to add your opinions or advice.

cheers guys

Why don't you publish your program to be tested by other people on this forum? From their results you could deduce the "formula" that you need.



If brute force does not solve your problem... you're not using enough!

That is an idea, but I can spot a couple of flaws in it:

(a) if most people are like me, they do not want to wade through a veritable shiteload of someone else's code just to help them out... (sad, but true)

(b) the engine is a work in progress, and I plan to modify it extensively once I have finished uni in a few weeks (and when I say modify, I mean completely tear it down and take a different approach to the problem... and probably switch to a KD-tree instead), so it doesn't matter too much right now.

I will keep everyone posted on anything interesting I find out, though (especially regarding ways to determine when to stop sub-dividing in a scene graph).
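For the curious, the stop test I'm leaning towards for the rewrite combines several limits rather than a single poly count. Rough sketch only; shouldStopSplitting is a hypothetical helper and all three numbers are placeholders to be tuned per machine:

```cpp
#include <cstddef>

// Stop splitting when ANY limit is hit. The three constants are knobs
// to tune per system, not magic numbers.
bool shouldStopSplitting(std::size_t polyCount, int depth, float nodeSize)
{
    const std::size_t maxPolys = 200;  // don't split batches the card eats in one gulp
    const int maxDepth = 10;           // bound the per-frame culling overhead
    const float minSize = 1.0f;        // world units; don't create tiny cells
    return polyCount <= maxPolys || depth >= maxDepth || nodeSize <= minSize;
}
```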

cheers again (yes, it can be inferred that I do enjoy a pint)
Adam

Edited by - Bad Monkey on October 23, 2001 11:46:18 AM
