Problems drawing BoundingBoxes

Started by plusnoir
6 comments, last by Tsus 12 years, 2 months ago
Hi folks,

I've got the following problem. I have some .X models loaded via a self-written AssetManager that keeps them in Lists. I also apply a diffuse shader to my models. All the models are "self-drawing" in a BasicModel class.



effect.CurrentTechnique = effect.Techniques["DiffuseLight"];
// Begin our effect
effect.Begin();
// obvious renderstates
//graphics.GraphicsDevice.RenderState.DepthBufferEnable = true;
//graphics.GraphicsDevice.RenderState.DepthBufferWriteEnable = true;
// A shader can have multiple passes, be sure to loop through each of them.
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    // Begin current pass
    pass.Begin();
    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            // calculate our worldMatrix..
            world = GetWorld() * mesh.ParentBone.Transform;
            // .. and pass it into our shader.
            // To access a parameter defined in our shader file (Shader.fx), use effectObject.Parameters["variableName"]
            Matrix worldInverse = Matrix.Invert(world);
            Vector4 vLightDirection = new Vector4(0.0f, 0.0f, 1.0f, 1.0f);
            effect.Parameters["matWorldViewProj"].SetValue(world * camera.view * camera.projection);
            effect.Parameters["matInverseWorld"].SetValue(worldInverse);
            effect.Parameters["vLightDirection"].SetValue(vLightDirection);
            // Render our meshpart
            graphics.Vertices[0].SetSource(mesh.VertexBuffer, part.StreamOffset, part.VertexStride);
            graphics.Indices = mesh.IndexBuffer;
            graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                part.BaseVertex, 0, part.NumVertices,
                part.StartIndex, part.PrimitiveCount);
        }
    }
    // Stop current pass
    pass.End();
}
// Stop using this effect
effect.End();


Now I checked out this tutorial on how to create BoundingBoxes. Of course it doesn't work :)

1. The Render method needs a view and a projection matrix. Are those the view and projection matrices I use in my Camera class, or what are they supposed to be? I just guessed that they are the matrices from my camera class. So I had to call the BoundingBoxRenderer.Render() function in Game1.Update(). It looks like this:


if (assetManager._modelsOnscreen.Count != 0)
{
    foreach (BasicModel model in assetManager._modelsOnscreen)
    {
        foreach (ModelMesh mesh in model._model.Meshes)
        {
            sphere = mesh.BoundingSphere;
            BoundingBoxRenderer.Render(BoundingBox.CreateFromSphere(sphere),
                this.GraphicsDevice, camera.view, camera.projection, Color.Red);
        }
    }
}

As you can see, I also had to generate my BoundingBox out of the mesh's BoundingSphere. No idea why the Microsoft guys didn't implement mesh.BoundingBox just like they did with mesh.BoundingSphere.
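For completeness, regarding question 1: the matrices really are just whatever the camera provides. A generic sketch of how such view/projection matrices are usually built in XNA 3.1 (illustrative values, not my exact Camera class):

```csharp
// Typical camera matrices in XNA 3.1 (values are just examples).
Matrix view = Matrix.CreateLookAt(
    new Vector3(0, 5, 10), // camera position
    Vector3.Zero,          // point the camera looks at
    Vector3.Up);           // up vector
Matrix projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,                  // 45 degree field of view
    GraphicsDevice.Viewport.AspectRatio, // width / height
    0.1f,                                // near plane
    1000f);                              // far plane
```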

2. If I just leave the code as it is, no BoundingBoxes are drawn at all. But as far as I can see, no more shaders are applied to the models either. They are drawn with BasicEffect. I guess it is somehow caused by the BasicEffect that is used for the BoundingBoxRenderer.

Any advice is welcome. Perhaps somebody experienced similar problems? I use XNA 3.1 if it is important.

Thanks in advance!
Another issue that could be of interest: I also tried to draw some SpriteFonts on the game screen. It had the same effect. Shaders were not applied to the meshes.

Is it possible that it somehow has to do with the order in which shaders/meshes, SpriteFonts, colored vertices... are drawn? I mean, I use the BasicEffect in order to draw the BoundingBox (vertices), and the meshes are also being drawn with the BasicEffect applied, although they should be drawn with my diffuse shader.

I sense some kind of connection there :)

I also moved the render code to the Draw method. Now I at least see the BB. Still some issues:

1. Shaders still not working properly, like described above.

2. The BB lines behind the model tend to be drawn in front of the model for a split moment.
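Maybe the split-second popping is a depth-state thing? Something like this right before drawing the BB lines might help (just a guess on my part; this assumes the XNA 3.1 fixed-function RenderState API):

```csharp
// Guess: re-enable depth testing before drawing the box lines.
// (Things like SpriteBatch.End() can leave the depth buffer disabled,
// so the lines would ignore the model's depth for a frame.)
GraphicsDevice.RenderState.DepthBufferEnable = true;
GraphicsDevice.RenderState.DepthBufferWriteEnable = true;
// ... then draw the bounding-box LineList as usual.
```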

Thanks,
plusnoir
I'm not really sure why nobody can help me with this problem. Is it a stupid question or a hard one?

Well, after one week of messing around with this problem I finally managed to draw the bounding boxes. The shader issue persists, though. I will try to describe it more precisely this time.

Here is my BasicModel.cs. It is a little messy, but the functionality should be clear. It is responsible for drawing the model and the corresponding bounding box.
It contains a method called DrawModel. Here I have two possibilities implemented: drawing the model with BasicEffect and drawing it with a diffuse light (taken from here).
I also have a method DebugDraw which uses a BasicEffect in order to draw the bounding box.
(Really sorry for posting the entire class, it is a mess, I admit.)


namespace XNA3Editor
{
    public class BasicModel : DrawableGameComponent
    {
        public Model model { get; protected set; }
        public String name { get; set; }
        public String id { get; set; }
        public float yawAngle { get; set; }
        public float pitchAngle { get; set; }
        public float rollAngle { get; set; }
        Matrix rotation = Matrix.Identity;
        Vector3 vRotation = Vector3.Zero;
        Matrix translation = Matrix.Identity;
        Vector3 position = Vector3.Zero;
        Vector3 direction = new Vector3(0, 0, 0);
        private BasicEffect renderer;
        private Effect effect;
        private BoundingBox aabb { get; set; } // axis-aligned bounding box
        private Camera camera;
        private short[] indexData; // The index array used to render the AABB
        private VertexPositionColor[] aabbVertices; // The AABB vertex array used for rendering
        protected Matrix world = Matrix.Identity;

        public BasicModel(Game game, Model m, String name) : base(game)
        {
            model = m;
            this.name = name;
            // Vertex declaration for rendering our 3D model.
            Game.GraphicsDevice.VertexDeclaration = new VertexDeclaration(Game.GraphicsDevice,
                VertexPositionNormalTexture.VertexElements);
            camera = ((Editor)game).camera;
        }
        private void CreateAABB(Model model)
        {
            aabb = new BoundingBox();
            foreach (ModelMesh mesh in model.Meshes)
            {
                // Create an array to store the vertex data.
                VertexPositionNormalTexture[] modelVertices =
                    new VertexPositionNormalTexture[mesh.VertexBuffer.SizeInBytes /
                        VertexPositionNormalTexture.SizeInBytes];
                // Get the model's vertices.
                mesh.VertexBuffer.GetData<VertexPositionNormalTexture>(modelVertices);
                // Create a new array to store the position of each vertex.
                Vector3[] vertices = new Vector3[modelVertices.Length];

                // Loop through the vertices.
                for (int i = 0; i < vertices.Length; i++)
                {
                    // Get the position of the vertex.
                    vertices[i] = modelVertices[i].Position;
                }
                // Create an AABB from the model's vertices.
                aabb = BoundingBox.CreateMerged(aabb,
                    BoundingBox.CreateFromPoints(vertices));
            }
        }
        protected override void LoadContent()
        {
            effect = Game.Content.Load<Effect>("Effects/Diffuse");
            // Create the bounding box from the model's vertices.
            CreateAABB(model);
            base.LoadContent();
        }

        public virtual void Update()
        {
            // Rotate model.
            rotation *= Matrix.CreateFromYawPitchRoll(yawAngle,
                pitchAngle, rollAngle);
        }

        public override void Draw(GameTime gameTime)
        {
            // Create a new vertex declaration.
            Game.GraphicsDevice.VertexDeclaration =
                new VertexDeclaration(Game.GraphicsDevice,
                    VertexPositionColor.VertexElements);

            // Draw bounding box.
            DebugDraw(aabb, Color.White);

            // Draw model.
            DrawModel(model);

            base.Draw(gameTime);
        }
        private void DrawModel(Model model)
        {
            #region draw model with diffuse effect
            Matrix[] transforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(transforms);
            // worldMatrix = Matrix.CreateWorld(position, rotationMatrix.Forward, rotationMatrix.Up);
            // Use the DiffuseLight technique from Shader.fx. You can have multiple techniques in an effect file.
            // If you don't specify what technique you want to use, it will choose the first one by default.
            effect.CurrentTechnique = effect.Techniques["DiffuseLight"];
            // Begin our effect
            effect.Begin();

            // A shader can have multiple passes, be sure to loop through each of them.
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                // Begin current pass
                pass.Begin();
                foreach (ModelMesh mesh in model.Meshes)
                {
                    foreach (ModelMeshPart part in mesh.MeshParts)
                    {
                        // calculate our worldMatrix..
                        //world = GetWorld() * mesh.ParentBone.Transform;
                        world = Matrix.CreateWorld(position, rotation.Forward, rotation.Up);
                        // .. and pass it into our shader.
                        // To access a parameter defined in our shader file (Shader.fx), use effectObject.Parameters["variableName"]
                        Matrix worldInverse = Matrix.Invert(world);
                        Vector4 vLightDirection = new Vector4(0.0f, 0.0f, 1.0f, 1.0f);
                        effect.Parameters["matWorldViewProj"].SetValue(world * camera.view * camera.projection);
                        effect.Parameters["matInverseWorld"].SetValue(worldInverse);
                        effect.Parameters["vLightDirection"].SetValue(vLightDirection);
                        // Render our meshpart
                        Game.GraphicsDevice.Vertices[0].SetSource(mesh.VertexBuffer, part.StreamOffset, part.VertexStride);
                        Game.GraphicsDevice.Indices = mesh.IndexBuffer;
                        Game.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                            part.BaseVertex, 0, part.NumVertices,
                            part.StartIndex, part.PrimitiveCount);
                    }
                }
                // Stop current pass
                pass.End();
            }
            // Stop using this effect
            effect.End();
            #endregion

            /*
            #region draw model using basic effect
            // Create a rotation matrix from the GameModel's rotation.
            Matrix rotationMatrix = Matrix.CreateFromYawPitchRoll(
                vRotation.X,
                vRotation.Y,
                vRotation.Z);
            // Create the world matrix from the GameModel's position and rotation.
            world = Matrix.CreateWorld(position, rotation.Forward,
                rotation.Up);
            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect Beffect in mesh.Effects)
                {
                    // Set effect lighting.
                    Beffect.EnableDefaultLighting();
                    Beffect.PreferPerPixelLighting = true;
                    // Set effect matrices.
                    Beffect.World = world;
                    Beffect.View = camera.view;
                    Beffect.Projection = camera.projection;
                }
                mesh.Draw();
            }
            #endregion
            */
        }
        public Matrix GetWorld()
        {
            return world; // * Matrix.CreateTranslation(position)
        }

        private void SetupRenderer()
        {
            // Create a new BasicEffect instance.
            renderer = new BasicEffect(Game.GraphicsDevice, null);
            // This lets you color the AABB.
            renderer.VertexColorEnabled = true;
            // Set renderer matrices.
            renderer.World = world;
            renderer.View = camera.view;
            renderer.Projection = camera.projection;
        }
        private void DebugDraw(BoundingBox aabb, Color color)
        {
            // Setup the debug renderer.
            SetupRenderer();
            // Create an array to store the AABB's vertices.
            aabbVertices = new VertexPositionColor[8];
            // Get an array of points that make up the corners of the AABB.
            Vector3[] corners = aabb.GetCorners();
            // Fill the AABB vertex array.
            for (int i = 0; i < 8; i++)
            {
                aabbVertices[i].Position = corners[i];
                aabbVertices[i].Color = color;
            }
            // Create the index array.
            indexData = new short[]
            {
                0, 1,
                1, 2,
                2, 3,
                3, 0,
                0, 4,
                1, 5,
                2, 6,
                3, 7,
                4, 5,
                5, 6,
                6, 7,
                7, 4,
            };
            // Start drawing the AABB.
            renderer.Begin();
            // Loop through each effect pass.
            foreach (EffectPass pass in renderer.CurrentTechnique.Passes)
            {
                // Start pass.
                pass.Begin();
                // Draw AABB.
                Game.GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColor>
                    (PrimitiveType.LineList, aabbVertices, 0, 8, indexData, 0, 12);
                // End pass.
                pass.End();
            }
            // End rendering.
            renderer.End();
        }
        public Vector3 _position
        {
            get { return this.position; }
            set
            {
                world = Matrix.Identity;
                world *= Matrix.CreateTranslation(this.position = value);
            }
        }

        public Model _model
        {
            get { return model; }
        }
    }
}


Now my problem is: if I draw the model using BasicEffect, everything looks fine. The bounding box and the model are drawn as expected. But if I try to apply the diffuse effect to the model, it is drawn wrong. As if there were no shader on it; it just looks flat.

Now, the thing is, in the Draw method I have a vertex declaration. If I comment that out and remove the bounding box, it actually draws the model with diffuse light on it. But it still looks wrong.

I have no idea what is going wrong there! Maybe the vertex declaration somehow gets mixed up between the BasicEffect for the BB and the diffuse effect on the model?

Help would be great,
Thanks.
Hi!

It looks like your vertex declaration doesn’t match.
Right now you use VertexPositionColor for everything. But the Model contains VertexPositionNormalTexture, and your shader expects VertexPositionNormal.

Oh, and better not create the vertex declaration over and over in the rendering loop. It is a D3D resource and should be created in LoadContent. Creating it once is enough. :)

So, let’s see.

  1. Try to create two vertex declarations, one with VertexPositionColor (for your bounding box) and the other one with VertexPositionNormalTexture (for the basic effect and your diffuse shader).
    It is important that you exactly tell the GPU how your vertex data is stored in the vertex buffer. Dx9 will be clever enough to see which attributes are actually used in the shader.
  2. Finally, bind the respective vertex declaration in your rendering code. (In DebugDraw the vertex declaration for VertexPositionColor and in DrawModel the other one.)

If it still doesn’t work, could you please attach a small sample project that shows the problem?
Your problem seems related to your setup of the input assembler, so in order to help you we’d need all kinds of information (shader code, layout of vertex data, vertex declarations, …). Most likely it will help to look at the debug output or possibly do a PIX run.

Cheers!
Hey thanks for the response.

I'm still learning, so could you tell me how I would declare and bind the vertices?
Hi again!

Alright, to setup the input assembler correctly we have to consider three things.

1. The layout of the data in the vertex buffer(s).
Vertex buffers store the vertex attributes, e.g.: position, normal, color, etc…
You either place everything interleaved in a single buffer.
|Position0|Normal0|Position1|Normal1|Position2|Normal2|…

Or you create multiple vertex buffers, each containing one element.
|Position0|Position1|Position2|…
|Normal0|Normal1|Normal2|…

The third option is to mix those (sometimes that’s useful, too).

If I’m not mistaken (haven’t touched XNA in like two years…) the Model stores the data interleaved. If you render with DrawUserPrimitives or DrawUserIndexedPrimitives the vertices are also interleaved (because they are simply specified as array).
For the GPU a vertex buffer is nothing more than some piece of memory full of floats. The buffer itself has no idea what it is storing. For this reason, we need a vertex declaration, telling the GPU how to read data from the vertex buffer.

2. The vertex declaration.
A vertex declaration defines, for each attribute in the vertex buffer, how the attribute is read. Therefore we specify a VertexElement[], which holds one entry per element. For some basic vertex types XNA already brings structs which have such a VertexElement[] defined (VertexPositionColor, for instance). But if you use your own types, you have to write them yourself, so you should have seen this once. Let’s write it for VertexPositionNormal, which means we store for each vertex a position and a normal.
VertexElement[] VertexElements =
{
    new VertexElement(0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0),
    new VertexElement(0, sizeof(float)*3, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Normal, 0),
};

Alright, what do we have here? The first parameter in the VertexElement constructor is the number of the stream. If you have all your attributes in a single vertex buffer, you will likely bind that buffer to stream 0. If you have multiple vertex buffers, you bind the first to 0, the second to 1, and so on. The second parameter is the offset in the buffer: first comes the position (offset = 0), next the normal (offset = 12 = sizeof(float)*3). When you bind a vertex buffer you need to know its stride, which is the size of the data for a single vertex. In our case the stride is 24 bytes: position (3 floats = 12 bytes) + normal (3 floats = 12 bytes).
The attributes are accessed by the input assembler like this:
element_address = stride * vertexID + offset;
Let’s use it in a small sample:
position0 = 24(stride) * 0(vertexID) + 0(offset) = 0
normal0 = 24 * 0 + 12 = 12
position1 = 24 * 1 + 0 = 24
normal1 = 24 * 1 + 12 = 36
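The same arithmetic in a tiny standalone C# fragment, just to make the rule explicit (nothing XNA-specific here; the variable names are mine):

```csharp
// element_address = stride * vertexID + offset
int stride = sizeof(float) * 6;       // 24 bytes: position (12) + normal (12)
int positionOffset = 0;
int normalOffset = sizeof(float) * 3; // 12 bytes

for (int vertexID = 0; vertexID < 2; vertexID++)
{
    Console.WriteLine("vertex {0}: position at byte {1}, normal at byte {2}",
        vertexID,
        stride * vertexID + positionOffset,
        stride * vertexID + normalOffset);
}
// vertex 0: position at byte 0, normal at byte 12
// vertex 1: position at byte 24, normal at byte 36
```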

Alright, having this in mind, we move on to the other parameters of the VertexElement constructor. The third parameter defines the type of data (Vector3 etc.), then we have a method (usually Default), and the last two are the semantic name and the semantic index.
Those last two are the bridge to the shader code.

3. The shader code.
In a shader you give each variable coming into or out of the shader program a semantic, which means some sort of “meaning”.
In your shader code (at least the source you linked to) I’ve spotted this:
OUT VertexShader( float4 Pos: POSITION, float3 N: NORMAL )
Here the variable “Pos” receives the data which is labeled by our vertex declaration as “POSITION”. For completeness: each semantic has a number at its end, so it should be “POSITION0”. The “0” at the end is the semantic index (the last parameter in the VertexElement constructor). If it is not specified, “0” is assumed by default. Well, and variable “N” receives what is labeled in the vertex declaration as “NORMAL”.

In DirectX 9 you can have more attributes in your vertex buffer (and with this also in your vertex declaration) than you actually use in the shader code. Also the order in which you specify the variables in the shader code doesn’t matter, so:
OUT VertexShader( float4 Pos: POSITION, float3 N: NORMAL )
is equal to:
OUT VertexShader(float3 N: NORMAL , float4 Pos: POSITION)
This is no longer true for Dx10+, because this sorting isn’t done by the input assembler anymore (for performance reasons), so it is best not to pick up bad habits and to sort the parameters correctly right away.

Okay, that’s all you need to know. :)
So, look at what your vertex buffer contains and create the vertex declarations accordingly:
For the model:
VertexDeclaration _DeclPosNormalTex = new VertexDeclaration(Game.GraphicsDevice, VertexPositionNormalTexture.VertexElements);

And for the box:
VertexDeclaration _DeclPosColor = new VertexDeclaration(Game.GraphicsDevice, VertexPositionColor.VertexElements);

Create those two in your LoadContent method and store the instances in some members of the class.
In your render code you simply set the declarations right before you submit the draw call.
Game.GraphicsDevice.VertexDeclaration = _DeclPosNormalTex;

Hope that cleared some things up. :)
Cheers!

Thanks a lot Tsus, that’s what I wanted to know. The problem is, the basic HLSL tutorials always deal with the BasicEffect or a single effect at a time.
Now the BB and the model are drawn with correct shaders on them.

The light source still screws up: it seems to somehow change position when I strafe around my object. As seen here. But I guess I somehow messed up the shader code ;)

Thanks for taking the time and writing almost a tutorial :)

The light source still screws up: it seems to somehow change position when I strafe around my object. As seen here. But I guess I somehow messed up the shader code ;)

Mhm. Yes, I guess so, too. :)
If the lighting depends on the view, the lighting calculations are often carried out in the wrong space.
Well, if you can’t figure it out, you’re welcome to show us your code. :)
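A typical culprit, sketched in HLSL (this is a guess at the shape of your shader, not your exact code; matInverseWorld is the parameter your C# side already sets): the normal must be in the same space as the light before the dot product.

```hlsl
// Sketch: do the lighting with both vectors in world space.
// Transforming the normal with the inverse-transpose of the world
// matrix also handles non-uniform scaling correctly.
float3 worldNormal = normalize(mul(float4(N, 0.0f), transpose(matInverseWorld)).xyz);
float3 lightDir    = normalize(vLightDirection.xyz); // assumed to be given in world space
float  diffuse     = saturate(dot(worldNormal, lightDir));
```

If your lighting is instead done in object space or view space, the light direction has to be transformed into that space too; mixing spaces is exactly what makes the light appear to move with the camera.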


Thanks for taking the time and writing almost a tutorial :)

No problem, you’re welcome. That’s why we’re here. :)
