Member Since 14 Feb 2012
Offline Last Active May 20 2016 05:44 AM

Posts I've Made

In Topic: Meshes rendered with the aid of shaders corrupted in windows 64 bit

20 May 2016 - 05:49 AM

Generally, D3DLOCK_DISCARD is described as follows:

D3DLOCK_DISCARD indicates that the application does not need to keep the old vertex or index data in the buffer. If the graphics processor is still using the buffer when lock is called with D3DLOCK_DISCARD, a pointer to a new region of memory is returned instead of the old buffer data. This allows the graphics processor to continue using the old data while the application places data in the new buffer. No additional memory management is required in the application; the old buffer is reused or destroyed automatically when the graphics processor is finished with it. Note that locking a buffer with D3DLOCK_DISCARD always discards the entire buffer, specifying a nonzero offset or limited size field does not preserve information in unlocked areas of the buffer.

OK, so D3DLOCK_DISCARD doesn't return the old data, only a pointer to a new region of graphics memory, which I have to initialize completely before calling Unlock().
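
To make that concrete, here is a minimal sketch of how I understand a D3DLOCK_DISCARD lock is meant to be used (my assumptions: pVB is a dynamic vertex buffer and each vertex is nine floats, as in my layout; numVertices and positions are placeholder names). The point is that every float of every vertex has to be written before Unlock, because nothing from the old buffer survives:

void* dataVB = NULL;
if(FAILED(pVB->Lock(0, 0, &dataVB, D3DLOCK_DISCARD)))
    throw runtime_error("Lock");

float* data = (float*) dataVB;
for(u32 i = 0; i < numVertices; ++i)
{
    // Write ALL nine floats of every vertex - the returned memory is uninitialized.
    data[9*i + 0] = positions[i].x;
    data[9*i + 1] = positions[i].y;
    data[9*i + 2] = positions[i].z;
    // floats 3..8: normals, texture coordinates, etc.
}

if(FAILED(pVB->Unlock()))
    throw runtime_error("Unlock");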

In Topic: Meshes rendered with the aid of shaders corrupted in windows 64 bit

09 May 2016 - 12:10 PM

The problem goes away when I use a flag of 0 instead of D3DLOCK_DISCARD. Since I only use this Lock during setup, before the scene is rendered, to initialize the mesh, using 0 instead of D3DLOCK_DISCARD is fine for me. However, if anybody has an idea why D3DLOCK_DISCARD messes up my mesh, please share your suspicion.
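
For reference, the setup lock with a 0 flag looks roughly like this (just a sketch; pVB and the nine-float vertex layout are the same as in my code further down). With 0 the existing contents are returned, so it is valid to read them back and only patch selected values:

void* dataVB = NULL;
if(FAILED(pVB->Lock(0, 0, &dataVB, 0)))   // flags = 0: existing vertex data is returned
    throw runtime_error("Lock");

float* data = (float*) dataVB;
// Reading the loaded positions is safe here, and vertices I don't touch keep their values.
D3DXVECTOR3 firstPos(data[0], data[1], data[2]);

if(FAILED(pVB->Unlock()))
    throw runtime_error("Unlock");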

In Topic: Meshes rendered with the aid of shaders corrupted in windows 64 bit

08 May 2016 - 02:28 PM

I thought the problem was specific to the Intel GMA graphics card, where software vertex processing has to be used, but I have the same issue on my PC with an Nvidia GT440 graphics card, where hardware vertex processing can certainly be used. I have narrowed the issue down. It occurs only when I do the following:

1. Load the mesh from an .x file (see the sketch after this list)

2. do:

void* dataVB = NULL;
if(FAILED(pVB->Lock(0, 0, &dataVB, D3DLOCK_DISCARD)))
    throw runtime_error("Goalnet::addToNetMeshVertexIndices::Lock");

/* This part stays commented out and the problem still occurs:
float* data = (float*) dataVB;
for(u32 i = 0; i < mNet.mesh->GetNumVertices(); ++i)
{
    u32 x = i;
    memcpy(&data[9*i+8], &x, 4);                                      // store the vertex index in the 9th float
    mVertexPos[i] = D3DXVECTOR3(data[9*i], data[9*i+1], data[9*i+2]); // cache the vertex position
}
*/

if(FAILED(pVB->Unlock()))
    throw runtime_error("Goalnet::addToNetMeshVertexIndices::Unlock");

3. Render with my effect file/shaders
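
Here is a rough sketch of step 1 and of how pVB is obtained (the file name is a placeholder; pDevice, mNet.mesh and pVB are from my code):

ID3DXMesh* mesh = NULL;
if(FAILED(D3DXLoadMeshFromX("net.x", D3DXMESH_MANAGED, pDevice,
                            NULL, NULL, NULL, NULL, &mesh)))
    throw runtime_error("D3DXLoadMeshFromX");
mNet.mesh = mesh;

IDirect3DVertexBuffer9* pVB = NULL;
if(FAILED(mNet.mesh->GetVertexBuffer(&pVB)))   // this is the buffer locked in step 2
    throw runtime_error("GetVertexBuffer");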


When step 2 is skipped, the graphics are rendered correctly, just as on 32-bit Windows. So it seems that invoking vertex buffer Lock and Unlock is what triggers the issue. It isn't triggered by the memcpy, because I put that block in comments and the problem still occurs, so it is the Lock/Unlock itself. Does anyone know how to solve this? Maybe when Locking, the graphics card returns values from its RAM to PC RAM as 32-bit, but on Unlock it reads them back as 64-bit, and that distorts the graphics? No exceptions are thrown.

In Topic: How to configure step by step shader/effect file compilation for x64

08 May 2016 - 12:34 PM

Does anyone know of at least one C++ game company that is user friendly and could help me? There must be people somewhere in the world who know how to solve issues like this.

In Topic: Meshes rendered with the aid of shaders corrupted in windows 64 bit

26 March 2016 - 04:31 PM

OK, problem solved. The root cause is that for the Intel graphics card I really should use software vertex processing, not hardware, as the card doesn't support shaders in hardware.
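
For anyone else hitting this, device creation with software vertex processing looks roughly like this (a sketch; pD3D, hWnd and the present parameters are placeholders):

D3DPRESENT_PARAMETERS pp = {};
pp.Windowed   = TRUE;
pp.SwapEffect = D3DSWAPEFFECT_DISCARD;

IDirect3DDevice9* pDevice = NULL;
// With D3DCREATE_SOFTWARE_VERTEXPROCESSING the vertex shaders run on the CPU,
// so the Intel card's missing hardware support no longer matters.
if(FAILED(pD3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                             D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                             &pp, &pDevice)))
    throw runtime_error("CreateDevice");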

Here is an article about that:


So my code and shader are OK, and there is no bug in them.

Thanks to everyone for the hints and help.