I've made some decent progress on Spineless. Cleaning up the physics interface is nearly complete, but it's still buggy. I did some more testing on rendering the height map and noticed that I had hardcoded the index buffer element type to unsigned short... so meshes with over 65k vertices caused some interesting problems. I'm fairly sure this is also why I got segfaults with some models a long time ago. As a temporary solution, I hardcoded it to unsigned int. :P I'll probably make this automatic so the type is chosen based on the number of vertices. Anyway, a heightmap with about a million polygons was rendering at a bit over 100 fps, so at least Spineless isn't polycount-limited now. ;) Granted, any renderer can easily do this with vertex buffers, but it was still nice to see. I'll post more screenshots of the terrain once I generate it with something a bit more intelligent than completely random points (the current heightmap implementation was written in about 15 minutes).
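Making the element type automatic could be as simple as picking the smallest type that can address every vertex. A minimal sketch of the idea; the helper name is hypothetical (not Spineless's actual interface), though the constant values match OpenGL's real GL_UNSIGNED_SHORT and GL_UNSIGNED_INT:

```python
# Real OpenGL enum values, defined here so the sketch is self-contained.
GL_UNSIGNED_SHORT = 0x1403  # 16-bit indices: can address at most 65536 vertices
GL_UNSIGNED_INT = 0x1405    # 32-bit indices

def choose_index_type(vertex_count):
    """Pick the smallest index element type able to address every vertex."""
    if vertex_count <= 0xFFFF + 1:
        return GL_UNSIGNED_SHORT
    return GL_UNSIGNED_INT
```

So a 70k-vertex terrain mesh would get 32-bit indices instead of silently wrapping around at 65536.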
EDIT: Discovery: a 3072x3072 texture loads about four times slower than a 4096x4096 one when using gluBuild2DMipmaps (around 15 seconds versus around 4 seconds), and it's specifically this call that slows things down. I guess the texture is first rescaled to 4096x4096 before the mipmaps are built? Behold the power of the power of two.
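If the slowdown really is a hidden rescale to the next power of two (which is my guess, not something I've verified in the GLU source), then padding textures to power-of-two dimensions up front would sidestep it. The size calculation itself is a one-liner:

```python
def next_power_of_two(n):
    """Smallest power of two greater than or equal to n."""
    return 1 << (n - 1).bit_length()
```

E.g. next_power_of_two(3072) gives 4096, and next_power_of_two(4096) stays 4096, so a pre-padded 4096x4096 texture would have nothing left to rescale.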
Nothing new on the game project; I've been concentrating on nailing down the Spineless interface (at least for now) so I don't have to constantly adapt to engine changes.
I was going to write about the Spineless renderer earlier but I sort of forgot about it, so I'll do that now. Like all scene management in Spineless, rendering is based on arbitrary attributes of nodes. Render operations (ops), which you set as node attributes, roughly correspond to OpenGL calls, e.g. "lighting", "projection" and "viewport". Each op has apply(), save() and restore() methods. Rendering is based on walking the scene graph (actually a tree), except for light nodes, which are stored in a list and applied globally (currently the N lights closest to the viewpoint; it would be easy to swap in another algorithm). Then, for each node:
1) Save and apply all render ops and node transformation
2) If the node contains a renderable attribute:
- Add it to the list of transparents if it's transparent
- Else render it
3) Descend to child nodes and start from step 1 with them
4) Restore all ops and node transformation
After this, the collected transparent nodes are sorted and rendered. Of course, a lot of attribute caching is done to speed up rendering, and there's more to do. The renderable attribute can be any object with a render() method. At the moment, Spineless comes with two such objects: Mesh and Text. I'm thinking of implementing billboards as decorators (i.e. something like node.render = Billboard(myMesh)), and particle systems will probably also plug into the render attribute. Later, culling based on a hierarchical bounding radius (i.e. the bounding radius of the whole subtree) will be done before step 1. I've yet to generalize multipass rendering (technically, rendering opaques and then transparents could already be considered multipass rendering), but I just might borrow Ysaneya's concept of pipes. It seemed like an elegant solution.
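Since anything with a render() method plugs in, the billboard decorator idea falls out almost for free. A hedged sketch of what I have in mind; the _face_camera step is a placeholder for the actual matrix manipulation, and none of this is implemented yet:

```python
class Billboard:
    """Wraps any renderable and orients it toward the camera before
    delegating to the wrapped object's own render()."""
    def __init__(self, renderable):
        self.renderable = renderable

    def render(self):
        self._face_camera()        # placeholder: would push a rotation
        self.renderable.render()   # that cancels the view rotation

    def _face_camera(self):
        pass  # actual camera-facing math goes here
```

Because the decorator exposes the same render() interface, decorators could even be stacked, and the renderer never needs to know the difference.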
As always, comments and questions are more than welcome.
Btw, I can't think of a good name for the attribute that holds the actual renderable object. At the moment it's called "visual", but I'm not sure that's a good name since it's an adjective. "render" could be better, but I'm not sure... "renderable" would be accurate, but the name is clumsy. Suggestions?