Feel free to define these however you want, or come up with better ones that allow for a better structure; the following is the interpretation I use in my own OpenGL-based engine.
Model: Contains several meshes, each with its material (when rendering, I bind a material and then draw the vertices that use it, since that is basically how OpenGL wants it). It may also contain different LOD stages. It can be loaded from a file.
Mesh: One vertex/index buffer pair per mesh.
Node: A node is just a position in the game world which may be part of a more complex scene tree and be associated with one or more models/lights/cameras/...
Material: Holds a shader, render states, and possibly some material-specific shader uniforms.
Entity: A node with a model and a skeleton.
Skeleton: A bone structure and current pose, also holding a pointer to its animation data.
The animation part is a bit tricky. I put the skeleton with its animation data (I only support skeletal animation) into the entity, as that allows different entities sharing the same model to have different animation states. For vertex-based animation I would probably make the keyframes part of the mesh, store a frame variable in the entity, and do the blending between frames in a shader.
When exporting an object from Blender to my custom file format, the resulting files are the skeleton and animation data, and the model. The model is exported so that it contains a separate mesh for each texture.
This is how I do it, and there are probably better ways to handle it.
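To make the relationships concrete, here is a minimal C++ sketch of how these types could fit together. All names and members are illustrative assumptions, not the exact layout of my engine; the key point is that the model (and its meshes) is shared, while the skeleton with its pose lives per entity:

```cpp
#include <cstdint>
#include <memory>
#include <vector>

// Illustrative sketch only -- names and members are assumptions.
struct Material {
    uint32_t shaderId  = 0;     // handle of the compiled shader program
    bool     depthTest = true;  // example render state
};

struct Mesh {
    uint32_t vertexBuffer = 0;  // one vertex/index buffer pair per mesh
    uint32_t indexBuffer  = 0;
    Material material;          // material bound before drawing this mesh
};

struct Model {
    std::vector<Mesh> meshes;   // one mesh per material/texture
    // LOD stages could be additional mesh lists selected by distance.
};

struct Node {
    float position[3] = {0, 0, 0};  // placement in the scene tree
    std::vector<Node*> children;
};

struct Skeleton {
    std::vector<int> parentBone;              // bone hierarchy
    const void*      animationData = nullptr; // shared animation data
};

struct Entity : Node {
    std::shared_ptr<Model> model;  // shared between entities
    Skeleton skeleton;             // per-entity pose/animation state
};
```

This layout is what allows two entities to reference the same model while each keeps its own animation state.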
For now it is just a list; later the lights will probably be sorted into an octree or something similar, which should of course be an improvement, but for now it is good enough to optimize the culling itself a little (for fewer than 1000 lights it performs very well, using multithreading and some basic code optimizations).
My code seems to work well and looks like this now:
It's not pretty, but I have to fix some other things before cleaning up, and while my planes have a function to get their distance to a point, at 32*24*1024*4 calls the function call overhead was the biggest performance hit ;)
IgnatusZuk, thanks, but your code has exactly the problem I am trying to solve: there are some cases where the function reports an object as visible while it is actually outside the view frustum. In most cases I expect this to affect only very few objects, so anything more complicated for culling is not worth it.
TMarques, I should probably have mentioned that I am culling spheres against a 3D frustum, so those corners are actually lines and not points. My approach of getting that V as the distance to the planes should still be fine, except for the issue you mentioned: I have to check which side of the planes the center is on, which explains why my implementation currently culls too many lights.
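For reference, the signed-distance test being discussed can be sketched like this (the plane representation and names are my assumptions; the key point is checking the sign, i.e. which side of each plane the sphere center is on):

```cpp
// A plane in the form dot(n, p) + d = 0, with n normalized and
// pointing towards the inside of the frustum.
struct Plane { float nx, ny, nz, d; };

// Signed distance: positive if the point is on the inside.
inline float signedDistance(const Plane& pl, float x, float y, float z) {
    return pl.nx * x + pl.ny * y + pl.nz * z + pl.d;
}

// Conservative sphere-vs-frustum test: the sphere is culled only if it
// lies completely outside at least one of the six planes. Spheres near
// edges/corners can still pass (false positives), which is exactly the
// remaining inaccuracy discussed in this thread.
bool sphereVisible(const Plane planes[6],
                   float cx, float cy, float cz, float r) {
    for (int i = 0; i < 6; ++i)
        if (signedDistance(planes[i], cx, cy, cz) < -r)
            return false;  // completely outside this plane
    return true;
}
```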
Thank you very much for that hint!
If there are any more ideas, please post them, as I'd love to find a better way if there is one.
Maybe not state of the art, but a very stable and commonly used technique for sun shadows is cascaded shadow mapping, combined with Exponential Shadow Maps or Percentage Closer Filtering to make them soft. Both softening techniques can be realized with just a depth texture, which is good, as not rendering color is a noticeable performance improvement.
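The cascade split distances are commonly chosen by blending a uniform and a logarithmic distribution (the "practical split scheme" from the parallel-split shadow maps literature). A small sketch, where the blend weight lambda is an assumed tuning parameter:

```cpp
#include <cmath>
#include <vector>

// Practical split scheme: blend between logarithmic and uniform splits.
// lambda = 1 -> purely logarithmic, lambda = 0 -> purely uniform.
std::vector<float> computeSplits(float nearZ, float farZ,
                                 int count, float lambda) {
    std::vector<float> splits(count + 1);
    for (int i = 0; i <= count; ++i) {
        float t   = float(i) / float(count);
        float lg  = nearZ * std::pow(farZ / nearZ, t);  // logarithmic split
        float uni = nearZ + (farZ - nearZ) * t;         // uniform split
        splits[i] = lambda * lg + (1.0f - lambda) * uni;
    }
    return splits;
}
```

The logarithmic part concentrates resolution near the camera, where perspective aliasing is worst; the uniform part keeps the far cascades from becoming uselessly large.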
For surface acne issues, glPolygonOffset works quite well for me; alternatives I found were to scale the depth in the light's projection matrix or to calculate the perfect bias using ddx/ddy.
With OpenGL 3.2 there are array textures available, which allow for very convenient selection of the right depth map within the final shadow shader. You can render into such an array texture in several passes, or in just one by setting the gl_Layer variable in a geometry shader. I experimented with several passes versus creating the data in a geometry shader, and it turned out that the speed difference is very small (the geometry shader was slightly faster) and that geometry shaders which create geometry don't seem to combine well with instancing (slower than rendering in several passes). Several papers propose drawing with instancing to remove the extra passes and using the geometry shader only to pass the geometry through and select the layer, instead of creating and culling new geometry. I also found an AMD extension that allows setting gl_Layer in the vertex shader.
Kinda state of the art seem to be Sample Distribution Shadow Maps, but they appear to come with small, yet still noticeable, quality changes when moving the camera.
For point lights, you can render a depth cubemap in just one pass, using instancing or a geometry shader to duplicate the geometry and select the layer, just as for the sun shadow splits.
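As an illustration of the single-pass variant, a geometry shader along these lines (shown here as a GLSL source string embedded in C++) emits each input triangle once per cascade and routes it to the right layer via gl_Layer. This is a sketch; the uniform name and the fixed split count of 4 are my assumptions:

```cpp
#include <string>

// GLSL 1.50 geometry shader that emits each input triangle once per
// layer of the depth array texture (4 cascades -> max_vertices = 12).
static const std::string kLayeredDepthGS = R"(#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 12) out;

uniform mat4 lightViewProj[4]; // one matrix per cascade (assumed name)

void main() {
    for (int layer = 0; layer < 4; ++layer) {
        gl_Layer = layer;  // select the array texture layer
        for (int i = 0; i < 3; ++i) {
            // assumes the vertex shader passes world-space positions
            gl_Position = lightViewProj[layer] * gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
)";
```

Attaching the array texture with glFramebufferTexture (rather than a per-layer variant) is what enables this layered rendering path.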
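When duplicating the geometry across the six cube faces, each face needs its own view orientation. The standard OpenGL cubemap conventions for the look direction and up vector of each face (used when building the six light view matrices) are:

```cpp
// Look directions and up vectors for the six cubemap faces, in the
// order +X, -X, +Y, -Y, +Z, -Z (standard OpenGL cubemap convention).
struct CubeFace { float dir[3]; float up[3]; };

static const CubeFace kCubeFaces[6] = {
    {{ 1,  0,  0}, {0, -1,  0}},  // +X
    {{-1,  0,  0}, {0, -1,  0}},  // -X
    {{ 0,  1,  0}, {0,  0,  1}},  // +Y
    {{ 0, -1,  0}, {0,  0, -1}},  // -Y
    {{ 0,  0,  1}, {0, -1,  0}},  // +Z
    {{ 0,  0, -1}, {0, -1,  0}},  // -Z
};
```

In the geometry shader the face index then goes into gl_Layer, analogous to the cascade index for the sun shadow splits.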
Most of those sources are based on DirectX, but can be easily adapted to OpenGL.
This is my result using 4 splits, each 1024^2 with 24-bit depth, using 2x2 PCF and GL_TEXTURE_COMPARE_MODE enabled. I size each split so that the part of the view frustum it covers always fits without having to change it, which allows the shadows to be completely flicker-free as long as the light source does not move.
I would give you numbers for my tests, but unfortunately I didn't write them down.