Keeping render objects and visibility determination separated

Started by
15 comments, last by jmakitalo 12 years ago

In my engine I made the actual draw call a virtual method of the Material object instead of the Mesh object.
I.e.
Mesh::display() calls scheduleRender, which appends a small RenderData structure to the render list. RenderData contains links to index and vertex buffers, backlinks to the mesh and material, sorting keys and some other data.
After sorting etc., Material::render is called.
The reason is that only the material instance (which manages shaders) knows what data it actually requires (vertices, normals, texcoords, colors, all kinds of matrices, special textures, lights, reflections, sky, aerial thickness etc.).


That is a good insight.

With this in mind, I have redesigned what I pasted earlier. So what if CDrawable consisted of virtual functions for binding all sorts of arrays? It would allow passing shader varying and uniform locations.

This gets a bit lengthy, but hopefully it is not too messy.

The beef is in the function drawDrawables, which can draw culled and sorted object instances while shaders, geometry and culling remain well decoupled in the code.




// cullable.hh

// Just a very basic class to be used by a quadtree or some other vis.
class CCullable
{
public:
vector3f getOrigin() const;
vector3f getBoxSize() const;
vector3f getAlignedBoxSize() const;
};

// quadtree.hh

#include "cullable.hh"

class CQuadNode
{
private:
// The outer vector is for grouping cullables by their kind.
// I want to cull meshes, particle systems and so on with a single tree.
vector<vector<CCullable*>> pCullables;
CQuadNode *children[4];
};

class CQuadTree
{
private:
CQuadNode *root;

public:
void insertCullables(const vector<CCullable*> &cullables, int group);

// Return indices to visible cullables for each group.
vector<vector<int>> frustumCull(vector3f cameraPos, vector3f cameraDir, float fov, float range);
};

// transformable.hh

// This acts as a base class for object instances.
class CTransformable
{
protected:
vector3f origin;
vector3f rotation;
vector3f scaling; // named "scaling" to avoid clashing with the scale() method
bool deleted;
bool selected;

matrix44f_c trans, invTrans;

void buildMatrices();

public:
void translate(const vector3f &v);
void rotate(const vector3f &v);
void scale(const vector3f &v);

void loadMatrix();
matrix44f_c getMatrix();
matrix44f_c getInvMatrix();
};

// These are useful for editing a scene.
vector<CTransformable*> getSelectedTransformables(const vector<CTransformable*> &t);
void translateTransformables(const vector<CTransformable*> &t, vector3f v);
void rotateTransformables(const vector<CTransformable*> &t, vector3f v);
void scaleTransformables(const vector<CTransformable*> &t, vector3f v);
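A minimal sketch of how two of these helpers might look (the stub types below stand in for the real vector3f and CTransformable, and I expose the selected flag publicly for brevity):

```cpp
#include <vector>

// Stand-in for the engine's vector3f (assumption for this sketch).
struct vector3f { float x = 0, y = 0, z = 0; };

// Stand-in for CTransformable; only the members the helpers need.
struct CTransformable {
    vector3f origin;
    bool selected = false;
    void translate(const vector3f &v) { origin.x += v.x; origin.y += v.y; origin.z += v.z; }
};

// Collect only the selected instances, so edit operations act on a subset.
std::vector<CTransformable*> getSelectedTransformables(const std::vector<CTransformable*> &t)
{
    std::vector<CTransformable*> out;
    for (CTransformable *p : t)
        if (p->selected)
            out.push_back(p);
    return out;
}

// Apply the same translation to every instance in the list.
void translateTransformables(const std::vector<CTransformable*> &t, vector3f v)
{
    for (CTransformable *p : t)
        p->translate(v);
}
```

rotateTransformables and scaleTransformables would follow the same pattern.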

// shader.hh

// This acts as a base class for specific shaders.
// The methods here allow the abstract function drawDrawables to tell
// drawables, what data the shader requires.
class CShaderBase : public CShader
{
public:
virtual bool requireTexCoords() const;
virtual bool requireNormals() const;
virtual bool requireTangents() const;
virtual bool requireWeights() const;
virtual bool requireBoneIndices() const;
virtual bool requireTexture() const;

virtual GLint getTangentLoc() const;
virtual GLint getWeightLoc() const;
virtual GLint getBoneIndexLoc() const;

// Locations of extra per-group uniforms (e.g. bone matrices),
// consumed by CDrawable::loadGroupUniforms.
virtual GLint *getUniformLocArray();
virtual int getNumUniformLoc() const;

virtual void bindTexture(CTexture *pTex);

virtual void loadObjectSpaceUniforms(matrix44f_c invTrans);
};

// drawable.hh

#include "transformable.hh"
#include "shader.hh"

// This is a base class for drawable object instances that may
// share some common data. A drawable is allowed to consist of groups.
// The idea is that each group is associated with one texture/material.
class CDrawable : public CTransformable
{
public:
virtual int getNumGroups();
virtual void bindGroupVertexBuffer(int index);
virtual void bindGroupNormalBuffer(int index);
virtual void bindGroupTexCoordBuffer(int index);
virtual void bindGroupTangentBuffer(int index, GLint loc);
virtual void bindGroupWeightBuffer(int index, GLint loc);
virtual void bindGroupBoneBuffer(int index, GLint loc);
virtual void bindGroupIndexBuffer(int index);
virtual void drawGroup(int index);
virtual CTexture *getGroupTexture(int index);

// Allow the drawable to load uniforms for a shader.
// These can be e.g. bone matrices for skinned meshes.
virtual void loadGroupUniforms(int index, GLint *locArray, int numLoc);

// Returns an integer that can be used to sort group bindings.
// For all drawables with the same data ID, the bindGroup* methods are
// called only once per group index; only drawGroup is invoked for each
// such group.
virtual int getDataID();
};

// Sort indices by data id. The indices refer to the drawables vector.
void sortDrawables(const vector<CDrawable*> &drawables, vector<int> &indices);
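For reference, sortDrawables could be implemented roughly like this (a sketch with a minimal stub in place of the abstract CDrawable); sorting the index vector instead of the drawables themselves keeps the indices produced by culling valid:

```cpp
#include <algorithm>
#include <vector>

// Minimal stand-in for CDrawable; only getDataID is needed here (assumption).
struct CDrawable {
    int dataID;
    int getDataID() const { return dataID; }
};

// Sort the index vector in place so that drawables sharing a data ID
// end up adjacent; the drawables vector itself is left untouched.
void sortDrawables(const std::vector<CDrawable*> &drawables, std::vector<int> &indices)
{
    std::stable_sort(indices.begin(), indices.end(),
        [&](int a, int b) {
            return drawables[a]->getDataID() < drawables[b]->getDataID();
        });
}
```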


// The indices point to the drawables vector, but indices.size() may be less than
// drawables.size() as some may be culled. Indices should also be sorted by data ID.
void drawDrawables(const vector<CDrawable*> &drawables, const vector<int> &indices, CShaderBase *sh)
{
int prevDataID = -1;

for(int i=0; i<indices.size(); i++){
CDrawable *p = drawables[indices[i]];
int dataID = p->getDataID();

sh->loadObjectSpaceUniforms(p->getInvMatrix());

// This function is a simplified version of what would be used.
// Since the indices are sorted by dataID, one should loop through
// groups first and then object instances, not vice versa as is done here.

for(int j=0; j<p->getNumGroups(); j++){

// If object source data changes, bind the new data.
if(dataID!=prevDataID){
p->bindGroupVertexBuffer(j);
p->bindGroupIndexBuffer(j);
if(sh->requireTexCoords())
p->bindGroupTexCoordBuffer(j);
if(sh->requireNormals())
p->bindGroupNormalBuffer(j);
if(sh->requireTangents())
p->bindGroupTangentBuffer(j, sh->getTangentLoc());
if(sh->requireWeights())
p->bindGroupWeightBuffer(j, sh->getWeightLoc());
if(sh->requireBoneIndices())
p->bindGroupBoneBuffer(j, sh->getBoneIndexLoc());
if(sh->requireTexture())
sh->bindTexture(p->getGroupTexture(j));
}

p->loadGroupUniforms(j, sh->getUniformLocArray(), sh->getNumUniformLoc());

glPushMatrix();
p->loadMatrix();

p->drawGroup(j);

glPopMatrix();
}

prevDataID = dataID;
}
}

// mesh.hh

// I don't want this module and class to be involved with
// any of the above modules. Coupling is done in the engine module.
class CMeshInstance
{
protected:
CMesh *pData[maxLevelsOfDetail]; // These are pointers to the actual shared data.
int currentLevelOfDetail;

public:
int getNumGroups();

void bindGroupVertexBuffer(int index);
void bindGroupNormalBuffer(int index);
void bindGroupTexCoordBuffer(int index);
void bindGroupTangentBuffer(int index, GLint loc);
void bindGroupWeightBuffer(int index, GLint loc);
void bindGroupBoneBuffer(int index, GLint loc);
void setGroupSkinMatricesUniform(int index, GLint loc, int maxBones);
void bindGroupIndexBuffer(int index);
void drawGroup(int index);
};

// engine.hh

class CShaderStaticMeshDirLight : public CShaderBase
{
// Implement the virtual routines here.


// e.g.
vector3f cameraPosWorldSpace;
vector3f lightDirWorldSpace;

void loadObjectSpaceUniforms(matrix44f_c invTrans);
};

#include "quadtree.hh"
#include "drawable.hh"
#include "mesh.hh"

// This couples the mesh instance to everything else.
class CMeshObject : public CCullable, public CDrawable, public CMeshInstance
{
public:
// Not sure if this hack works. Mesh sorting would be based on data address.
int getDataID(){
return (int)pData[currentLevelOfDetail];
}

void loadGroupUniforms(int index, GLint *locArray, int numLoc){
setGroupSkinMatricesUniform(index, locArray[0], 50);
}
};

class CEngine
{
private:
CQuadTree *qt;

vector<CMeshObject*> meshObjects;

CShaderStaticMeshDirLight *shaderStaticMeshDirLight;

public:
bool init();
void draw();
};

bool CEngine::init()
{
// ...

qt->insertCullables(meshObjects, 0);

// ...
}

void CEngine::draw()
{
// Frustum cull all objects with quadtree.
vector<vector<int>> objectIndices;
objectIndices = qt->frustumCull(cameraPos, cameraDir, fov, range);

// Sort visible mesh objects according to data IDs.
vector<int> &meshIndices = objectIndices[0];
sortDrawables(meshObjects, meshIndices);

// Draw the visible & sorted meshes using the directional light shader.
shaderStaticMeshDirLight->bind();
shaderStaticMeshDirLight->setCameraPos(cameraPos);
shaderStaticMeshDirLight->setLightDir(lightDir);

drawDrawables(meshObjects, meshIndices, shaderStaticMeshDirLight);

shaderStaticMeshDirLight->unbind();
}


Edit: fixed a few typos in the code.
I do not understand how you plan to get object indices from cullables. If I understand correctly, those will be distributed between quadtree nodes, and thus the index inside a single node's cullable list does not match the global index.

// Not sure if this hack works. Mesh sorting would be based on data address.
int getDataID(){
return (int)pData[currentLevelOfDetail];
}

On a 64-bit system you will discard the higher 32 bits of the address, and the result may not be unique.

What is the reason that you have
Shader::bindTexture
but
Drawable::bindTexCoordBuffer?
It would seem more logical to me if all shader binding were done in the shader and the Drawable provided only buffer locations.
Lauris Kaplinski

First technology demo of my game Shinya is out: http://lauris.kaplinski.com/shinya
Khayyam 3D - a freeware poser and scene builder application: http://khayyam.kaplinski.com/

The reason is, that only material instance (which manages shaders) knows, what actual data it requires (vertexes, normals, texcoods, colors, all kinds of matrices, special textures, lights, reflections, sky, aerial thickness etc. etc.)

I disagree. The model itself is the only thing that knows everything about how it needs to be rendered.
And this is an important fact because it allows the model to switch between different vertex buffer combinations for different purposes. For example, generating a shadow map. Why would you send the normals, UV coordinates, tangents, and bitangents when all you really need to send is the vertices?

Multiple streams have some overhead, but less overhead than sending 40 bytes of unnecessary data across the bus.

Meshes know what their materials are, and materials should not be tied too tightly to the vertex buffers etc., or else you will need an entirely new material to handle the above case when all that is really necessary is a single new shader shared between all meshes (all meshes will be using the same shader during shadow-map generation).

There is a logical connection between shaders and vertex buffers, since shaders expect input from vertex buffers, but aside from matching inputs these too are decoupled as much as possible. Any system that attempts to perform some kind of coupling between material data, shaders, and vertex/index buffers is inflexible and should be avoided. These things need to be as dynamic as possible, not only for flexibility but, as mentioned above, for optimization.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid


I do not understand how you plan to get object indices from cullables. If I understand correctly, those will be distributed between quadtree nodes, and thus the index inside a single node's cullable list does not match the global index.


You're right, that would not work. So maybe I'll just throw away the CCullable class and use plain indices:


class CQuadNode
{
private:
// Vector for each group.
vector<int> *indices;
CQuadNode *children[4];
bool leaf;
};

class CQuadObject
{
vector2f origin;
vector2f boxSize;
int seq;
};

class CQuadTree
{
private:
CQuadNode *root;
int seq;
int ngroups;

// Vector for each group.
vector<CQuadObject> *objects;

// Indices that have passed the last operation.
// Vector for each group.
vector<int> *indices;

public:
bool allocateGroups(int n);

// Insert to objects vector.
bool insertObjects(const vector<CQuadObject> &_objects, int group);

// Perform frustum culling. Results are stored to indices.
void frustumCull(vector3f cameraPos, vector3f cameraDir, float fov, float range);

void lineIntersect(vector3f s, vector3f e);
void cylinderIntersect(vector3f s, vector3f e, float rad);

// Get index vector to query results from last operation.
vector<int> &getIndices(int group);
};


I also changed it so that e.g. frustumCull does not return an index vector, but each quadtree query holds the result in the indices vector. A reference to this can then be obtained by using getIndices. This avoids copying the vector.
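The copy-avoidance can be illustrated with a minimal stub (not the real quadtree; frustumCull here just fakes a result):

```cpp
#include <vector>

// Minimal stub of the query-result pattern (not the real CQuadTree).
class CQuadTreeStub {
    std::vector<std::vector<int>> indices;  // one result vector per group
public:
    explicit CQuadTreeStub(int ngroups) : indices(ngroups) {}

    // A real frustumCull would fill these during the tree traversal;
    // here we just fake a result for group 0.
    void frustumCull() { indices[0] = {2, 0, 5}; }

    // Returning a reference lets the caller read the result without copying.
    std::vector<int> &getIndices(int group) { return indices[group]; }
};
```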


On a 64-bit system you will discard the higher 32 bits of the address, and the result may not be unique.


Ok, so using long int should do?
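(A sketch of a more portable option than long int, which is still 32 bits even on 64-bit Windows: uintptr_t from <cstdint> is guaranteed to be wide enough to hold a pointer value.)

```cpp
#include <cstdint>

// Stand-in for the engine's mesh data type (assumption for this sketch).
struct CMesh { };

// Returning uintptr_t instead of int avoids discarding the high bits
// of the address on 64-bit platforms.
uintptr_t getDataID(const CMesh *pData)
{
    return reinterpret_cast<uintptr_t>(pData);
}
```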


What is the reason that you have
Shader::bindTexture
but
Drawable::bindTexCoordBuffer?
It would seem more logical to me if all shader binding were done in the shader and the Drawable provided only buffer locations.


Well, Drawable::bindTexCoordBuffer binds the texture coordinate buffer, since the mesh knows where the buffer is but the shader does not. Conversely, the mesh has the texture but doesn't know where to bind it, while the shader does, so drawDrawables gets the texture from the mesh group and calls Shader::bindTexture. But your suggestion sounds good, I will consider it.

I disagree. The model itself is the only thing that knows everything about how it needs to be rendered.
And this is an important fact because it allows the model to switch between different vertex buffer combinations for different purposes. For example, generating a shadow map. Why would you send the normals, UV coordinates, tangents, and bitangents when all you really need to send is the vertices?


I think that my approach takes this into account quite well. One just calls

drawDrawables(meshObjects, meshIndices, shaderStaticMeshDirLight);

for drawing meshes with the directional light and

drawDrawables(meshObjects, meshIndices, shaderStaticMeshShadowmap);

for drawing things into the shadowmap. For static meshes a shader is probably not needed, but for skinned meshes it is. Also, the shadowmap shader can return false from all the require* methods, since indeed, only vertex locations are needed.
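For example, such a shadowmap shader could look roughly like this (a standalone stub, not derived from the real CShaderBase):

```cpp
// Depth-only shader: it needs nothing beyond vertex positions, so every
// require* query returns false and the draw loop skips binding normals,
// texcoords, tangents, weights, bone indices and textures.
class CShaderShadowmapStub {
public:
    virtual bool requireTexCoords()   const { return false; }
    virtual bool requireNormals()     const { return false; }
    virtual bool requireTangents()    const { return false; }
    virtual bool requireWeights()     const { return false; }
    virtual bool requireBoneIndices() const { return false; }
    virtual bool requireTexture()     const { return false; }
    virtual ~CShaderShadowmapStub() = default;
};
```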


[quote name='Lauris Kaplinski' timestamp='1334154315' post='4930258']
The reason is that only the material instance (which manages shaders) knows what data it actually requires (vertices, normals, texcoords, colors, all kinds of matrices, special textures, lights, reflections, sky, aerial thickness etc.).

I disagree. The model itself is the only thing that knows everything about how it needs to be rendered.
And this is an important fact because it allows the model to switch between different vertex buffer combinations for different purposes. For example, generating a shadow map. Why would you send the normals, UV coordinates, tangents, and bitangents when all you really need to send is the vertices?
[/quote]
I do not have to send normals unless needed. What happens is:

  • Mesh creates render structure, which lists available buffers
  • Material knows which buffers it needs and sends only those to pipeline

In my case a single material can use more than one shader, depending on the requested rendering type. For example the shadow map is usually done by a simple depth shader invoked by various materials. But still there are differences:

  • A plain color or simply textured material only has to send vertices to the depth shader
  • A textured masked material has to send UV coordinates and textures to a textured depth shader because it needs the transparency mask
  • Everything animated in the vertex shader needs a specialized depth shader
  • etc.


Meshes know what their materials are, and materials are not to be tied too tightly to the vertex buffers etc., or else you will need an entirely new material to handle the above case when all that is really necessary is a single new shader that is shared between all meshes (all meshes will be using the same shader during shadow-map generation).

I found that the data managed by mesh objects (vertex and index buffers) is much simpler and more homogeneous than the data required and managed by materials/shaders. Thus I found it much easier to move all OpenGL state code into materials and keep meshes as mostly dumb data containers. I do not find it inherently inflexible; the same functionality is simply implemented in another place.
Lauris Kaplinski

Thanks all for your insights. I have now pretty much made the heavy changes to my engine and I think I'm pretty happy with it.

So now basically, when it comes to the original question, the quadtree is completely isolated from everything else. This decoupling is achieved by using just index vectors, not base classes or pointers. Although this might not be in line with typical C++ principles (vaguely stated), it does its job.

Now, my meshes and the like provide methods for binding different buffers separately. Shaders tell what they need, and the two are combined in a drawing function which takes as input the shader, a vector of e.g. mesh objects and a vector of indices into these objects. These indices can be processed by quadtree and state-sorting routines before drawing.

This topic is closed to new replies.
