
benp444

Member Since 23 Mar 2012
Offline Last Active Mar 22 2013 03:46 PM

Topics I've Started

D3D10CreateDeviceAndSwapChain fails in debug mode

15 March 2013 - 05:49 PM

For a few weeks I have been developing only in release mode. I recently switched back to debug mode and found that my previously working code failed. I have narrowed down the issue and found that it is present not only in the code I was working on, but in all code (test code, old projects, sample code, etc.) on my machine.

 

The problem is that when D3D10CreateDeviceAndSwapChain is called in debug mode I get an E_FAIL error code (see the attached error box). That is, the code below (taken from a Frank Luna sample) fails:

 

 

    HR( D3D10CreateDeviceAndSwapChain(
            0,                 //default adapter
            md3dDriverType,
            0,                 // no software device
            D3D10_CREATE_DEVICE_DEBUG,
            D3D10_SDK_VERSION,
            &sd,
            &mSwapChain,
            &md3dDevice) );
 

However, if I change the fourth argument to 0, everything works.

 

I can only think that either my graphics card is broken or the DirectX SDK or DirectX runtime is somehow corrupt. Has anyone got any suggestions as to what my next steps should be?

 

Ben.


GPU Particle Class Design

05 October 2012 - 02:13 PM

I have gone through the GSParticles demo that comes with the DirectX SDK and studied Frank Luna's chapter on particles in his DirectX 10 book. I can get particles working and have no problem understanding the basics; however, I am struggling to come up with a good solution or class design so that I can create (e.g.) explosions whenever I want.

Here are the basic requirements I want to meet:

1) Create a particle explosion on the GPU using a geometry shader, similar to the GSParticles demo, except that the explosion starts and then completes. Let's say the explosion lasts 3 seconds.
2) Be able to create an arbitrary number of explosions, within reason (e.g. 5), without disrupting the existing ones.
3) Be able to create these explosions during any game frame I choose.
4) For example: explosion a) at t=0 secs, explosion b) at t=0.5, c) at t=2.5, d) at t=360, etc.

The pseudo code looks a bit like this:


[source lang="cpp"]
class Particle {
    void init();
    void render(); // this actually takes the elapsed and total time as well
};

Particle::init() {
    m_fXPosition = m_pEffect10->GetVariableByName("g_position")->AsVector();
}

Particle::render() {
    // update pass
    m_fXPosition->SetFloatVector((float*)&m_position);
    if (firstTime)
        Draw();
    else
        DrawAuto();
    // render pass, passes in elapsed time
    // etc...
}
[/source]

This class currently works in my game as it stands e.g:

[source lang="cpp"]
onDeviceInit() {
    m_explosion = new Particle();
    m_explosion->init();
}

onFrameUpdate() {
    // an explosion event occurred, so
    m_explosion->reset(); // sets firstTime to true
}

onFrameRender() {
    // this is called every frame whether there are particles alive or not
    m_explosion->render();
}
[/source]

The above, however, fails requirement 2. If a second explosion event occurs before the existing one is complete, the existing particles will be destroyed, because firstTime will be true and Draw() will be called.

My shader is very similar to GSParticles.fx and for the sake of this topic consider it the same.

I appreciate that seeding particles might be the way to go, and of course GSParticles has a single seed (PT_LAUNCHER), but I can't see how I can inject seeds into an already running shader.

Bearing in mind that I don't want to create a new Particle instance and call init() every time I want an explosion, the only way I can see of doing this would be to create (e.g.) 5 Particle instances and store them in an array. Then, when I want an explosion, I take the next instance and set its position. This would allow me a maximum of 5 explosions at any given time (a 6th explosion would kill all the particles in the 1st explosion).
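To make the array idea above concrete, the CPU-side bookkeeping I have in mind looks something like this. Emitter is a hypothetical stand-in for my Particle class, and the D3D10 draw calls are omitted; this is only a sketch of the pool logic:

```cpp
#include <array>
#include <cstddef>

// Minimal stand-in for the Particle class above.
struct Emitter {
    float position[3] = {0, 0, 0};
    bool  firstTime = true;    // forces Draw() instead of DrawAuto()
    float ageSeconds = 0.0f;
    bool  alive = false;
};

// Fixed pool of emitters; spawning a 6th explosion recycles the oldest slot
// (killing only that explosion's particles, not everyone's).
class ExplosionPool {
public:
    static const std::size_t kMax = 5;

    Emitter& spawn(float x, float y, float z) {
        Emitter& e = m_pool[m_next];
        m_next = (m_next + 1) % kMax;  // ring allocation: oldest is recycled
        e = Emitter{};
        e.position[0] = x; e.position[1] = y; e.position[2] = z;
        e.alive = true;
        return e;
    }

    void update(float dt, float lifetimeSeconds) {
        for (Emitter& e : m_pool) {
            if (!e.alive) continue;
            e.ageSeconds += dt;
            if (e.ageSeconds >= lifetimeSeconds) e.alive = false;
        }
    }

    std::size_t numAlive() const {
        std::size_t n = 0;
        for (const Emitter& e : m_pool) n += e.alive ? 1 : 0;
        return n;
    }

private:
    std::array<Emitter, kMax> m_pool;
    std::size_t m_next = 0;
};
```

Each frame, onFrameRender would then loop over the pool and render only the alive emitters.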

Can anyone give me some hint as to how to create a simple particle framework using Geometry shaders in DirectX 10?

Quaternion x,y,z rotation

05 September 2012 - 05:11 AM

Firstly, sorry for yet another newbie post about quaternions, but I have scoured the forums for hours without any luck. Secondly, I have posted this in the DirectX section because I am hoping for answers in the form of D3DXQuaternion... method calls rather than a load of maths, which may only hinder my progress.

Anyhow, my problem is fairly straightforward. I have a simple model viewer which shows my model in the middle of the screen. When I press the X, Y or Z key, the model should rotate in MODEL coordinates. I am trying to use quaternions to do this.

The result is that I can only get a single axis to rotate in model space; the other two subsequently rotate in world space. My code is:

D3DXQUATERNION qZ;
  D3DXQUATERNION qY;
  D3DXQUATERNION qX;
  D3DXQUATERNION qOrient;
  D3DXQUATERNION qTotal;

  D3DXQuaternionIdentity(&qZ);
  D3DXQuaternionIdentity(&qY);
  D3DXQuaternionIdentity(&qX);
  D3DXQuaternionIdentity(&qOrient);


  D3DXVECTOR3 axisZ(0,0,1);
  D3DXVECTOR3 axisY(0,1,0);
  D3DXVECTOR3 axisX(1,0,0);
  D3DXQuaternionRotationAxis(&qZ,&axisZ,ga->getRotation().z);
  D3DXQuaternionRotationAxis(&qY,&axisY,ga->getRotation().y);
  D3DXQuaternionRotationAxis(&qX,&axisX,ga->getRotation().x);

  //I am not sure if this is necessary.
  D3DXQuaternionNormalize(&qZ,&qZ);
  D3DXQuaternionNormalize(&qY,&qY);
  D3DXQuaternionNormalize(&qX,&qX);

  //The first quat will rotate correctly in model space, the latter 2 will rotate in world space.
  qTotal=qY*qX*qZ;

  D3DXMATRIX rotation;
  D3DXMatrixRotationQuaternion(&rotation,&qTotal);
  world=scale*rotation*move;

Note that ga is an instance of GameActor and ga->getRotation() returns a D3DXVECTOR3 containing x,y and z rotations in radians.
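One approach I am experimenting with is to stop rebuilding the orientation from absolute angles each frame (only the innermost factor of qY*qX*qZ ends up applied in model space) and instead keep an accumulated orientation quaternion, composing each key press onto it as an incremental rotation about a model axis. A sketch with a hand-rolled quaternion type instead of D3DXQUATERNION; note that D3DXQuaternionMultiply composes its operands in the opposite order to the Hamilton product used here, so with D3DX the delta may need to go on the other side:

```cpp
#include <cmath>

// Minimal quaternion type standing in for D3DXQUATERNION.
struct Quat { float w, x, y, z; };

Quat quatFromAxisAngle(float ax, float ay, float az, float angle) {
    float h = angle * 0.5f, s = std::sin(h);
    return { std::cos(h), ax * s, ay * s, az * s };
}

// Hamilton product: under v' = q v q*, (a * b) applies b first, then a.
Quat operator*(const Quat& a, const Quat& b) {
    return {
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w
    };
}

// Rotate a vector by a unit quaternion: v' = q v q*.
void rotate(const Quat& q, const float in[3], float out[3]) {
    Quat v{ 0, in[0], in[1], in[2] };
    Quat c{ q.w, -q.x, -q.y, -q.z };
    Quat r = q * v * c;
    out[0] = r.x; out[1] = r.y; out[2] = r.z;
}

// Apply a key press as an incremental rotation about a MODEL axis:
// the delta is composed on the local (right) side of the accumulated
// orientation, so its axis is interpreted in model space.
Quat applyLocalRotation(const Quat& orient,
                        float ax, float ay, float az, float angle) {
    return orient * quatFromAxisAngle(ax, ay, az, angle);
}
```

The stored orientation then replaces the per-frame qY*qX*qZ product when building the rotation matrix.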

I would be grateful if anyone could suggest a high-level approach to solving my problem.

Ben.

interpolation of normal on low polygon useless

12 July 2012 - 06:42 PM

I have created a tetrahedron using 4 vertices and 12 indices. I then calculated a normal for each vertex and passed them to the shader to do ambient and diffuse lighting. The results are shown in the picture below; as you can see, I get dark sections where I expected lit sections:

Attached File  tetra bad shade.png   12.97KB   31 downloads

I am assuming that this is caused by the interpolation of the normals. As I said, my approach to creating this tetrahedron was to use only 4 vertices and let the indices do the rest. Should I instead define 4 faces (12 vertices) and define the normals per face? I know the latter approach would work, but my question is: should the former approach also work, or should I reserve the low-vertex-count solution for high-polygon-count objects?

My code for defining the vertices, indices and normals is below:

PrimitiveVertexStruct vertices[]=
{
  { D3DXVECTOR3(0.0f,1.0f,0.0f), WHITE_VECTOR, ZERO_VECTOR3 }, //1
  { D3DXVECTOR3(1.0f,-1.0f,0.0f), RED_VECTOR,  ZERO_VECTOR3 }, //2
  { D3DXVECTOR3(-1.0f,-1.0f,0.0f),BLUE_VECTOR, ZERO_VECTOR3 }, //3
  { D3DXVECTOR3(0.0f,0.0f,2.0f), BLACK_VECTOR, ZERO_VECTOR3 }, //4
};
m_numTetraVertices = sizeof(vertices) / sizeof(PrimitiveVertexStruct);

m_numTetraIndices=12;
DWORD indices[] =
	{
  0,1,3, //1,2,4
  1,2,3, //2,3,4
  2,0,3, //3,1,4
  0,2,1 //1,3,2
};
//compute averaged vertex normals: one face normal per triangle
for(int i=0;i<m_numTetraIndices/3;i++) {
  //indices of triangle
  int i0=indices[i*3+0];
  int i1=indices[i*3+1];
  int i2=indices[i*3+2];
  //vertices of triangle
  D3DXVECTOR3 v0=vertices[i0].Pos;
  D3DXVECTOR3 v1=vertices[i1].Pos;
  D3DXVECTOR3 v2=vertices[i2].Pos;
  //compute face normal
  D3DXVECTOR3 e0=v1 - v0;
  D3DXVECTOR3 e1=v2 - v0;
  D3DXVECTOR3 normal;
  D3DXVec3Cross(&normal, &e0, &e1);
  //add to any existing normals for that vertex
  vertices[i0].Normal+=normal;
  vertices[i1].Normal+=normal;
  vertices[i2].Normal+=normal;
}
//normalize the accumulated normals
for(int i=0;i<m_numTetraVertices;i++) {
  D3DXVec3Normalize(&vertices[i].Normal, &vertices[i].Normal);
}
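For reference, the 12-vertex alternative I mention above would look something like this without D3DX. Vec3 and FaceVertex are hypothetical stand-ins for my structs; the point is that every face gets its own copy of the vertices, each carrying the face normal, so interpolation across a triangle is constant:

```cpp
#include <cmath>
#include <cstddef>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

struct FaceVertex { Vec3 pos; Vec3 normal; };

// Expand an indexed mesh into per-face vertices carrying the face normal,
// so the rasterizer interpolates a constant normal across each triangle.
void buildFlatShaded(const Vec3* verts, const unsigned* indices,
                     std::size_t numIndices, FaceVertex* out) {
    for (std::size_t i = 0; i + 2 < numIndices; i += 3) {
        Vec3 v0 = verts[indices[i]];
        Vec3 v1 = verts[indices[i+1]];
        Vec3 v2 = verts[indices[i+2]];
        Vec3 n = normalize(cross(sub(v1, v0), sub(v2, v0)));
        out[i]   = { v0, n };
        out[i+1] = { v1, n };
        out[i+2] = { v2, n };
    }
}
```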

Sprites, primitives and depth testing

04 July 2012 - 08:08 AM

Hi,
I am trying to draw both sprites and 2D primitives on the screen at the same time, and am having problems with depth. The primitives are created using D3D10_PRIMITIVE_TOPOLOGY_LINESTRIP.

There are two situations that I can have. Note that the rectangles have a depth of 0.5, the left-hand ball 0.2, and the two balls on the right 0.6:

1) Depth Enabled: the sprites render correctly based on their assigned depths. However, the alpha channels of the sprites show up.

Attached File  DepthEnabled.png   12.2KB   28 downloads



2) Depth Disabled: the primitives and sprites do not render in the correct depth order. If the sprites are the last group to be drawn then they will be on top, and vice versa.

Attached File  DepthDisabled.png   12.31KB   25 downloads

During initialisation I have created two depth-stencil states:

[source lang="cpp"]
//Create a disabled depth state
SAFE_RELEASE(m_pDepthDisabledState);
D3D10_DEPTH_STENCIL_DESC depthStencilDesc;
ZeroMemory(&depthStencilDesc, sizeof(depthStencilDesc));
depthStencilDesc.DepthEnable = false;
depthStencilDesc.DepthWriteMask = D3D10_DEPTH_WRITE_MASK_ALL;
depthStencilDesc.DepthFunc = D3D10_COMPARISON_LESS;
depthStencilDesc.StencilEnable = false;
depthStencilDesc.StencilReadMask = 0xFF;
depthStencilDesc.StencilWriteMask = 0xFF;
depthStencilDesc.FrontFace.StencilFailOp = D3D10_STENCIL_OP_KEEP;
depthStencilDesc.FrontFace.StencilDepthFailOp = D3D10_STENCIL_OP_INCR;
depthStencilDesc.FrontFace.StencilPassOp = D3D10_STENCIL_OP_KEEP;
depthStencilDesc.FrontFace.StencilFunc = D3D10_COMPARISON_ALWAYS;
depthStencilDesc.BackFace.StencilFailOp = D3D10_STENCIL_OP_KEEP;
depthStencilDesc.BackFace.StencilDepthFailOp = D3D10_STENCIL_OP_DECR;
depthStencilDesc.BackFace.StencilPassOp = D3D10_STENCIL_OP_KEEP;
depthStencilDesc.BackFace.StencilFunc = D3D10_COMPARISON_ALWAYS;
hr = DXUTGetD3D10Device()->CreateDepthStencilState(&depthStencilDesc, &m_pDepthDisabledState);

//Create an enabled depth state
SAFE_RELEASE(m_pDepthEnabledState);
depthStencilDesc.DepthEnable = true; //the only difference
hr = DXUTGetD3D10Device()->CreateDepthStencilState(&depthStencilDesc, &m_pDepthEnabledState);

//set to disabled by default
DXUTGetD3D10Device()->OMSetDepthStencilState(m_pDepthDisabledState, 0);
//DXUTGetD3D10Device()->OMSetDepthStencilState(m_pDepthEnabledState, 0);
[/source]
I have tried switching between the depth-enabled and depth-disabled states while rendering either the primitives or the sprites, but it seems I cannot mix and match the two. Has anyone any ideas what approach I should take to solve this?
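One idea I have been sketching out is to draw the depth-tested geometry first with the enabled state bound, and then draw the alpha-blended sprites back to front so each blend composites over whatever is behind it. The CPU-side sort would be something like the following; Sprite here is a hypothetical stand-in for whatever per-sprite data I keep:

```cpp
#include <algorithm>
#include <vector>

struct Sprite {
    float depth; // 0 = near, 1 = far, matching the depths quoted above
    // texture, position, etc. omitted
};

// Sort alpha-blended sprites back to front (largest depth first) so each
// blend operation draws over everything already rendered behind it.
void sortForAlphaBlend(std::vector<Sprite>& sprites) {
    std::sort(sprites.begin(), sprites.end(),
              [](const Sprite& a, const Sprite& b) { return a.depth > b.depth; });
}
```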

Thanks,

Ben.
