BSP trees with modern OpenGL


I've been working on my own OpenGL rendering engine for a first-person game.

For simplicity's sake, let's assume I'm rendering a large static scene which contains both opaque and transparent objects.

From what I understand, I have to sort my transparent objects back to front in order for them to be rendered correctly.

If I heard correctly, BSP trees are the way to do this as long as my scene is static.

However, BSP trees seem to sort the individual polygons in my scene rather than the individual objects.

Is this correct? If not, how do I get the correct "object order" for the objects in my scene?

I have only constructed a basic rendering loop in which I store all my geometry in one large VBO, which I then draw in one go using indexed drawing.

How would I be able to tell OpenGL in which order it should render the individual polygons, without having to buffer my vertex/index data over and over again?

If anyone could shed some light on this, or point me in the direction of some good examples using modern OpenGL, I would be forever in your debt ;-).

However, BSP trees seem to sort the individual polygons in my scene rather than the individual objects.

Is this correct?

A BSP doesn't care what you store in its nodes. It might be a single polygon, a group of objects, or whatever else you want to sort.

how do I get the correct "object order" for objects in my scene?

You check which side of the splitting plane the camera is on: the sign of the plane equation evaluated at the camera's position.

Depending on that sign, you decide which child node to traverse first: the front or the back half-space.
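To make that concrete, here is a minimal sketch of back-to-front BSP traversal. The node layout and all names (BspNode, traverseBackToFront, payload) are illustrative, not from any particular engine:

```cpp
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Minimal BSP node: a splitting plane plus child indices into a node array.
struct BspNode {
    Vec3  planeNormal;  // normal of the splitting plane
    float planeD;       // plane equation: dot(normal, p) + d = 0
    int   front;        // index of front child, or -1 if none
    int   back;         // index of back child, or -1 if none
    int   payload;      // id of whatever this node stores (polygon/object)
};

// Visit payloads back-to-front relative to the camera position by always
// recursing into the half-space the camera is NOT in first.
void traverseBackToFront(const std::vector<BspNode>& nodes, int node,
                         const Vec3& camPos, std::vector<int>& out) {
    if (node < 0) return;
    const BspNode& n = nodes[node];
    float side = dot(n.planeNormal, camPos) + n.planeD;
    if (side >= 0.0f) {                                    // camera in front
        traverseBackToFront(nodes, n.back,  camPos, out);  // far side first
        out.push_back(n.payload);
        traverseBackToFront(nodes, n.front, camPos, out);  // near side last
    } else {                                               // camera in back
        traverseBackToFront(nodes, n.front, camPos, out);
        out.push_back(n.payload);
        traverseBackToFront(nodes, n.back,  camPos, out);
    }
}
```

Note that only the camera's position enters the test; the payloads come out ordered far-to-near for that viewpoint.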


I just iterate through every game object I know is visible and throw them into a separate binary tree (every frame), using their squared distance as each node's key along with a pointer to the object. Then when I render, I just walk the tree from furthest to nearest. FYI: I use the squared distance to populate the tree to avoid a sqrt.
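A minimal sketch of that idea, here using std::multimap as the sorted tree (the names and the id-per-object scheme are illustrative):

```cpp
#include <cassert>
#include <map>
#include <vector>

struct Vec3f { float x, y, z; };

// Squared distance is enough for ordering, so no sqrt is needed.
static float distSq(const Vec3f& a, const Vec3f& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Insert every visible object into a tree keyed by squared distance,
// then walk it in reverse to get furthest-to-nearest (back-to-front).
std::vector<int> backToFront(const std::vector<Vec3f>& positions,
                             const Vec3f& camera) {
    std::multimap<float, int> tree;   // key: distance^2, value: object id
    for (int i = 0; i < (int)positions.size(); ++i)
        tree.emplace(distSq(positions[i], camera), i);
    std::vector<int> order;
    for (auto it = tree.rbegin(); it != tree.rend(); ++it)
        order.push_back(it->second);
    return order;
}
```

A multimap is used rather than a map so two objects at the same distance don't silently drop one another.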

A BSP is not the preferred method for rendering translucent objects.

Use a standard render-queue and sort full objects (sub-meshes) back-to-front.

How would I be able to tell OpenGL in which order it should render the individual polygons, without having to buffer my vertex/index data over and over again?

It doesn’t matter if the vertices are all in one single large vertex buffer. You need multiple index buffers, one for each sub-mesh (a single draw call for a translucent part of the overall model). I don’t know what you mean by “buffering” your data; once the VBO and IBOs are created, buffering is done. You just use a render-queue to decide the order in which to draw everything.
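A minimal sketch of such a render-queue record, assuming each translucent sub-mesh is a range in a shared index buffer (the struct and function names are illustrative). Per frame you sort the records and would then issue one glDrawElements per record, passing the byte offset into the bound IBO; no vertex or index data is re-uploaded:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// One draw call's worth of a translucent sub-mesh, addressed as a range
// inside a shared index buffer.
struct SubMeshDraw {
    std::size_t indexOffset;  // first index of this sub-mesh in the IBO
    std::size_t indexCount;   // number of indices to draw
    float       viewDepth;    // distance from the camera this frame
};

// Order the draw records furthest-first; the renderer then just walks
// the queue and draws each range.
void sortBackToFront(std::vector<SubMeshDraw>& queue) {
    std::sort(queue.begin(), queue.end(),
              [](const SubMeshDraw& a, const SubMeshDraw& b) {
                  return a.viewDepth > b.viewDepth;
              });
}
```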

L. Spiro


A BSP is not the preferred method for rendering translucent objects.

Use a standard render-queue and sort full objects (sub-meshes) back-to-front.

Ok, so basically: Just take your translucent (sub)meshes, order them back to front and render them in that order. But exactly how do I correctly order them back to front in a reliable manner if BSP trees can't be used?

I've read one or two books on OpenGL and game engine development, but some basic concepts about scene management, rendering phases, and other practical approaches for rendering a complete scene always seem to be missing. Can anyone point me to any good resources that tackle these problems specifically?


Ok, so basically: Just take your translucent (sub)meshes, order them back to front and render them in that order. But exactly how do I correctly order them back to front in a reliable manner if BSP trees can't be used?

Depends..

Firstly, for it to work entirely correctly, your sub-meshes must be convex and non-intersecting (or they would need their polygons sorted too for perfect close-ups), though if they're nearly so you might get away with it anyway. After that you can sort based on Z or distance to the camera, measured from the closest point on the object's bounding sphere/box.
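A minimal sketch of that sort key, assuming a bounding sphere (function and type names are illustrative): the distance to the closest point on the sphere is the distance to the center minus the radius, clamped at zero for when the camera is inside the sphere.

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

// Distance from the camera to the closest point on an object's bounding
// sphere: |center - camera| - radius, clamped to 0 if the camera is inside.
float closestPointDistance(const V3& camera, const V3& center, float radius) {
    float dx = center.x - camera.x;
    float dy = center.y - camera.y;
    float dz = center.z - camera.z;
    float d  = std::sqrt(dx * dx + dy * dy + dz * dz) - radius;
    return d > 0.0f ? d : 0.0f;
}
```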


Thanks, sounds easy enough. I'll try to avoid using concave translucent objects for now and just focus on getting everything working with a simple back to front ordering mechanism based on the distance to the camera.


If you draw planar stuff like windows it can be good to use something like a BSP, for example in the attached image with an orange and a blue window. If you draw more compact objects it's not an issue.

You probably don't have to worry about this case, but for static geometry then everything on the close side of the object's plane should be drawn before everything on the far side. So in this case either the blue window has to be drawn first because it's on the same side of the orange window as the camera, or the orange window would have to be split in two objects, along the blue window plane, and the half on the same side of the blue plane as the camera would be drawn first, then the blue window, then the part on the back side.

That's the basic theory of the BSP for static geometry, the geometry is preprocessed to split all objects that cross another object plane, so in the actual game it can never happen because the objects never cross each other. That's also why they're not used much anymore as there are often better ways for such high polygon models as are used nowadays. In Quake 1 it made perfect sense as there were only a few planes.

Edited by Erik Rufelt


A BSP is not the preferred method for rendering translucent objects.

Why not? Isn't this the main (or first) reason BSP was invented - to allow for fast back-to-front traversal of arbitrary geometry based on camera position, without having to sort the geometry every frame?

The only problem with BSPs is that they're expensive to build in real-time. But for static geometry, for which the BSP can be built offline, aren't they still the best choice?

And unless other techniques like PVS or occlusion culling (which has proved inefficient on hardware unless coupled with space partitioning) are also used, I'm still not convinced that sorting even dynamic geometry (convex or not) in real time is any faster than generating BSPs in real time.

Edited by tonemgub


A blue and an orange window, where the orange window is closer to the camera but should still be drawn behind the blue one, so the distance to the planes should be used instead (d0 and d1).

In this case using a BSP is straightforward and cheapest. You don't need to calculate d0 and d1.

It's trivial to check which node to render first: you only need a single dot product to see which side of the window's plane the camera is on.


What I'm getting from the comments so far is that BSP trees do have their use when I want to draw everything in the correct order.

Whenever I look up BSP trees, everyone is talking about planes, and I do understand how I can divide up my space if my objects are just regular planes.

But let's expand on Erik's example here. What if instead of a blue plane and orange plane, I had a blue and orange box, or better yet a blue and orange teapot.

How do I create a BSP tree for something like this? Do I define a bounding box for each item and use the bounding box's planes to build my BSP tree, or is there some other accepted way of doing this?

I know my questions are touching on some really basic stuff here, but I can't seem to find a straightforward answer to this. Some real life example of a scene containing some 3D objects that is partitioned step by step into a BSP tree would seriously come in handy.


What a BSP really does is divide your entire world into convex volumes. If your objects have bounding boxes that never intersect, you can sort of build a BSP using just the bounding boxes... but the boxes will still get split, even if they don't intersect, once you build a tree of an entire map, so it doesn't really make sense anymore.

(You can build a BSP using the triangles in a teapot, but that makes even less sense).

So I think in your case you don't want a BSP. BSPs are all about planes: binary space partitioning, binary in the sense that every partition divides space into exactly two. A plane divides space into two convex (possibly infinite) volumes, while a box divides it into one convex volume inside the box and a concave volume outside it. A BSP of the faces of a box actually creates 7 convex volumes: 6 outside the box and 1 inside.


BSP trees let you easily traverse the scene in back-to-front order from any point in space, so you can render a scene without Z-buffering.

For a modern renderer, BSP trees are too complex and restrictive. Z-buffering is a lot more flexible.


Why not? Isn't this the main (or first) reason BSP was invented - to allow for fast back-to-front traversal of arbitrary geometry based on camera position, without having to sort the geometry every frame?

BSPs were useful in the days when geometry wasn’t so complex.
If your goal is accurate rendering, you will cut triangles, increasing the triangle count, and slice geometry into multiple parts, increasing the draw-call count.

Even if you build a BSP tree for static geometry offline, geometry these days is so heavily subdivided that you will definitely create slivers, and you will have so many draw calls that it simply won’t be worth it both to traverse the tree (which is in itself slower than simply running over a pre-sorted list) and to draw through it.

But for static geometry, for which the BSP can be built offline, aren't they still the best choice?

I said “pre-sorted” above because any intelligently designed render-queue will take advantage of frame-to-frame temporal coherence.
From 1 frame to the next, objects will almost always have the same sort order.
The way you populate the render-queue will be consistent from frame to frame. Since you don’t sort the actual objects in the render-queue, but rather indices to those objects, you can keep the indices in sorted order from the last frame and use an insertion sort to go over the already-sorted list in O(n) time. Objects usually change order between frames only 1 or 2 at a time, so the sort has virtually nothing to do.

After that you draw items by simply running over the array.
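A minimal sketch of that temporally coherent pass (names are illustrative): the index array carried over from last frame is insertion-sorted against this frame's depths, which is near O(n) when almost nothing has changed order.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Re-sort last frame's draw order by this frame's depths, back-to-front
// (larger depth first). Insertion sort is near O(n) on an almost-sorted
// array, which is the common frame-to-frame case.
void insertionSortByDepth(std::vector<int>& order,
                          const std::vector<float>& depth) {
    for (std::size_t i = 1; i < order.size(); ++i) {
        int key = order[i];
        std::size_t j = i;
        while (j > 0 && depth[order[j - 1]] < depth[key]) {
            order[j] = order[j - 1];  // shift nearer items toward the end
            --j;
        }
        order[j] = key;
    }
}
```

After the pass, drawing is just a linear walk over `order`.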

There is virtually no way a BSP tree can beat such a system. The simple act of traversing a BSP tree incurs many more draw calls, dot products, branches, and an explicit stack (to avoid recursion).

A BSP tree is only useful if you need accurate results and performance isn’t an issue.  A correct render-queue will always be faster, but there can be artifacts.  We are talking about a first-person-shooter here, not a glass-house simulation, so a render-queue is the appropriate solution here.
Besides that, if we are talking about Direct3D 11 or OpenGL 4.0, compute shaders and order-independent transparency would be better than BSP trees.

I know my questions are touching on some really basic stuff here, but I can't seem to find a straightforward answer to this. Some real life example of a scene containing some 3D objects that is partitioned step by step into a BSP tree would seriously come in handy.

Look at the link I posted.
http://en.wikipedia.org/wiki/Binary_space_partitioning#Generation
Notice how it divides D into D1, D2, and D3, and also divides B and C?
If you don’t understand how to render back-to-front, you probably can’t handle stable division of polygons.
Even if you can, the fact that you have to slice polygons should be a red flag for 2 reasons:
#1: It’s not numerically stable. You will create slivers, and these slivers can cause unstable slicing elsewhere, etc.
#2: You are creating more triangles to render. This is bad. Sometimes you will have an entire draw call just to render 2 or 4 extra triangles sliced off the edge of a bigger polygon. This is the worst way to use a graphics card.

Not only is a render-queue easier to implement, it will provide faster renders with no extra geometry.
You should be able to find resources on render-queues via Google.

L. Spiro

Edited by L. Spiro


BSPs were useful in the days when geometry wasn’t so complex.

This is a good point, but from what I've seen, you can still do pretty neat things with BSPs.

If your goal is accurate rendering, you will cut triangles, increasing the triangle count, and slice geometry into multiple parts, increasing the draw-call count.

Not necessarily. BSP trees can also be used just for sorting, without accurate rendering. Instead of splitting a polygon that intersects the splitting plane between the left and right branches, you can add the same polygon to both branches. You still do the split offline, for the purpose of correct BSP compilation, but the final tree stores the unsplit polygons. Then, when traversing the tree, you just check that each polygon is only added to the render queue once. And if you still want accurate rendering, you don't have to cut all polygons, just the transparent ones that intersect other transparent polygons. That is usually done by CSG, so there's nothing extra to do during BSP compilation: you'd have to do it anyway for correct transparency, and intersecting polygons should be a rare case, because artists know to avoid them.

frame-to-frame temporal coherence

You can do this with BSPs too: you just remember which BSP cell the camera is in, and if the camera doesn't move from one frame to the next, you don't have to traverse the BSP again; you just reuse the geometry from the previous traversal. Also, if the camera does move, you can rebuild the render queue only when the camera actually passes through a portal plane (from one BSP cell to another); you can keep the portal planes in a separate list to avoid a BSP traversal just to detect which portal was crossed.

since you don’t sort the actual objects in the render-queue, but rather the indices to objects in the render-queue, you can keep the indices in sorted order from last frame

You can use indexed geometry with BSP trees as well. In fact, that's exactly how the BSP tree references the actual polygons, IIRC. If all of your polygons are already in a vertex buffer, all you have to do when traversing the BSP tree is build an index buffer.
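A minimal sketch of that combined with the no-split idea above (all names are illustrative): during traversal, each visited polygon appends its indices to one big index list, and a per-frame stamp ensures a polygon stored in both half-spaces is emitted only once.

```cpp
#include <cassert>
#include <vector>

// A polygon referencing vertices in the shared vertex buffer, plus a
// frame stamp used to deduplicate polygons stored in both BSP branches.
struct Polygon {
    std::vector<unsigned> indices;  // indices into the shared vertex buffer
    unsigned lastEmitFrame = 0;     // start frame counters at 1, not 0
};

// Called for each polygon as the BSP traversal visits it back-to-front;
// appends its indices to the frame's index buffer exactly once.
void emitPolygon(Polygon& poly, unsigned frame,
                 std::vector<unsigned>& indexBuffer) {
    if (poly.lastEmitFrame == frame) return;  // already queued this frame
    poly.lastEmitFrame = frame;
    indexBuffer.insert(indexBuffer.end(),
                       poly.indices.begin(), poly.indices.end());
}
```

The resulting index list is what you would upload (or stream) as the IBO for the translucent pass.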

There is virtually no way a BSP tree can beat such a system. The simple act of traversing a BSP tree incurs many more draw calls, dot products, branches, and an explicit stack (to avoid recursion).

If you use indexed geometry, you can reduce the number of draw calls to 1 (for all of the opaque geometry) + number of transparent polygons. This is the minimum number of draw calls you can get away with in any case, if you want proper transparency.

Besides that, if we are talking about Direct3D 11 or OpenGL 4.0, compute shaders and order-independent transparency would be better than BSP trees.

Sorry, I'm still not convinced. Maybe I just like BSP trees too much. Besides, what would keep you from using compute shaders for BSP geometry? Although if the geometry is static, why would you need to compute it at run-time?

geometry these days will be so heavily subdivided you will definitely create slivers

A correct render-queue will always be faster, but there can be artifacts.

render-queue easier to implement

This is actually the point I wanted to get to... The furthest I ever got with implementing BSPs was a Quake III BSP loader with collision detection I wrote from scratch. However, Quake III's BSP does have all the issues you mentioned.

So I've been reading and thinking about implementing my own BSP compiler for a few years now that gets rid of all those issues.

What I like most about BSP trees, though, is that they're useful for collision detection, not rendering. So maybe that makes me a bit biased...

Anyway, simply because of the complexity involved in implementing proper BSP generation, I'd probably not use BSP in a professional project (unless I had a team of math geeks behind me) - and this is the only reason (at least for me).

Edited by tonemgub


I've been working as a programmer for several years now, so I know my way around a programming language or two. I've been doing some basic stuff with OpenGL (I worked for a company using OpenGL for their 3D cartographic applications), but as you can all probably tell, I've still got a lot to learn.

This FPS I'm developing at the moment is more of a stepping stone to get myself acquainted with game programming. The environments I'm creating contain mostly simple convex shapes (although their poly count can get a bit high at times).

From L. Spiro's comments I've deduced that using BSP trees may be a bit overkill for the purposes I have in mind. I'm also not so thrilled about cutting into my existing polygons or otherwise complicating the geometry of my scene, so I think I'll just take L. Spiro's advice and go with a render-queue implementation that does some basic sorting of my objects.

Anyway, thanks for all the help. It's nice to know that there is a large community to fall back on.
