562538332

Member

  • Content Count: 22
  • Joined
  • Last visited

Community Reputation: 134 Neutral

About 562538332

  • Rank: Member
  1. I wrote a very simple HLSL shader, but the matrix multiplication confuses me.

         // The matrices are column major
         cbuffer Transform : register(b0)
         {
             matrix gViewMatrix;
             matrix gProjMatrix;
         };

         void VS(in  float3 pos    : POSITION,
                 in  float4 col    : COLOR,
                 out float4 colour : COLOR,
                 out float4 posH   : SV_POSITION)
         {
             matrix vp = gProjMatrix * gViewMatrix;

             // The code below works
             posH = mul(gViewMatrix, float4(pos, 1.0));
             posH = mul(gProjMatrix, posH);

             // The code below doesn't work
             //posH = mul(vp, float4(pos, 1.0));

             colour = col;
         }

     If I multiply the vector by each matrix with mul, the result is correct. But if I use the product of the two matrices, the result is wrong! Why does this happen?
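     If I understand the HLSL operator semantics correctly, the * operator on two matrices is component-wise rather than a true matrix product, so the vp built above would not be the composed view-projection transform. As a worked comparison (my reading of the language rules, not verified against this exact shader):

         (A * B)_{ij}   = A_{ij} * B_{ij}               (component-wise)
         mul(A, B)_{ij} = \sum_k A_{ik} * B_{kj}        (matrix product)

     Under that reading, building the combined matrix as mul(gProjMatrix, gViewMatrix) should behave the same as the two-step version.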
  2. PhysX has a property for every collision object; back in my time it was called "skin width" or something like that. It is simply an extra margin around the shape, along the surface normal, inside which contact is generated before the shapes actually touch, and it can be positive, negative, or zero.

     I had noticed that property and set it to zero for both the ground shape and the character shape, but it couldn't eliminate that phenomenon! I wonder whether there is something else related to it.
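     For reference, a minimal sketch of how I set it, assuming the PhysX 3.x PxShape offsets are the modern equivalent of the old skin width (the shape pointers are placeholders):

         // Shrink the collision margin on both shapes.
         // contactOffset has to stay strictly greater than restOffset,
         // so "zero" in practice means a very small positive contact offset.
         groundShape->setRestOffset(0.0f);
         groundShape->setContactOffset(0.001f);
         characterShape->setRestOffset(0.0f);
         characterShape->setContactOffset(0.001f);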
  3. No, I haven't set anything for it. I just created a TriangleMesh for it.
  4. I tried to use PhysX to implement a character controller (PxController). PhysX supports two controller shapes, box and capsule, which are defined by a PxControllerDesc. Even when I set the description's contactOffset to a number very close to 0, both of them showed an unexpected result: [attachment=29473:box.JPG] [attachment=29472:Capsule.JPG]

     The ground is the plane whose height is 0. I wondered whether the character was actually hovering above the ground, so I used getFootPosition() to get the foot position. The result is, unexpectedly, 0, but from the images it should be higher than 0. Why does PhysX show that strange result, and how can I solve this problem?
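     For reference, a rough sketch of how I set the controller up, assuming a recent PhysX 3.x PxCapsuleControllerDesc (the dimensions and positions here are placeholders):

         PxCapsuleControllerDesc desc;
         desc.height        = 1.0f;
         desc.radius        = 0.3f;
         desc.material      = material;
         desc.contactOffset = 0.001f;   // very close to 0, but it has to stay positive
         desc.position      = PxExtendedVec3(0.0, 1.0, 0.0);

         PxController* controller = controllerManager->createController(desc);

         // This is the confusing part: the reported foot position is 0
         // even though the capsule appears to hover above the ground.
         PxExtendedVec3 foot = controller->getFootPosition();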
  5. Does the Havok Engine or Bullet Engine also have that restriction? 
  6. I want to attach a PxBoxGeometry to a PxRigidDynamic as simulation data, because PhysX doesn't allow a TriangleMesh to be simulated on a simulated PxRigidDynamic. So I create a PxShape with a PxBoxGeometry like this:

         vec3 halfSize = boundingBox.GetHalfSize();
         PxBoxGeometry boxGeometry = PxBoxGeometry(halfSize.x, halfSize.y, halfSize.z);
         PxShape* boxShape = physics->createShape(boxGeometry, *material);
         body->attachShape(*boxShape);

     After that I found the boxShape is offset from the original. To verify that, I also created a ConvexMesh using the box's vertices:

         convexDesc.points.count  = 8;
         convexDesc.points.stride = sizeof(vec3);
         convexDesc.points.data   = boundingBox.GetAllCorners();
         convexDesc.flags         = PxConvexFlag::eCOMPUTE_CONVEX;

         PxDefaultMemoryOutputStream stream;
         if (cooking->cookConvexMesh(convexDesc, stream))
         {
             PxDefaultMemoryInputData input(stream.getData(), stream.getSize());
             PxConvexMesh* convexMesh = physics->createConvexMesh(input);
             PxConvexMeshGeometry convexGeometry = PxConvexMeshGeometry(convexMesh);
             PxShape* convexShape = physics->createShape(convexGeometry, *material);
             body->attachShape(*convexShape);
         }

     The code above worked very well: the convex really surrounded the original bounding box (it actually was the original bounding box).

     This is how the two bounding boxes end up in different positions: [attachment=29445:box.JPG]

     Note that for both the PxBoxGeometry shape and the PxConvexMeshGeometry shape I haven't set the local position, because it shouldn't be necessary.

     That is very confusing! After that I also created a PxSphereGeometry and found it was offset from the original as well. What is the reason?
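     For what it's worth, my current guess is that a plain PxBoxGeometry is always centred on the shape's local origin, while the convex cooked from GetAllCorners() keeps the corners' actual coordinates, so the two only line up if the bounding box happens to be centred on the actor's origin. A minimal sketch of compensating for that, assuming PxShape::setLocalPose and a hypothetical GetCenter() helper on my bounding box class:

         vec3 center   = boundingBox.GetCenter();   // hypothetical helper returning the AABB centre
         vec3 halfSize = boundingBox.GetHalfSize();
         PxShape* boxShape = physics->createShape(
             PxBoxGeometry(halfSize.x, halfSize.y, halfSize.z), *material);
         // Shift the box so it sits where the AABB actually is,
         // instead of being centred on the actor's origin.
         boxShape->setLocalPose(PxTransform(PxVec3(center.x, center.y, center.z)));
         body->attachShape(*boxShape);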
  7. In PhysX, a PxTriangleMeshGeometry attached to a simulated PxRigidDynamic must not be a SIMULATION_SHAPE. In other words, you can't give a simulated PxRigidDynamic a SIMULATION_SHAPE TriangleMesh. That is very annoying! If I want a TriangleMesh to be a SIMULATION_SHAPE, the PxRigidDynamic it is attached to must be KINEMATIC, and then PhysX will not simulate the rigid actor for me. I want the PxRigidDynamic with the TriangleMesh to be SIMULATED, not KINEMATIC, and the TriangleMesh to also be used for collision. How can I solve this?
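     The workaround I keep running into, if I understand it correctly, is to keep the triangle mesh as a query-only shape and let a simpler proxy geometry drive the dynamics. A rough sketch, assuming the PxShapeFlag API (triangleMesh and halfSize are placeholders):

         // Triangle mesh: scene queries only, no contact generation on the dynamic body.
         PxShape* meshShape = physics->createShape(
             PxTriangleMeshGeometry(triangleMesh), *material);
         meshShape->setFlag(PxShapeFlag::eSIMULATION_SHAPE, false);
         meshShape->setFlag(PxShapeFlag::eSCENE_QUERY_SHAPE, true);
         body->attachShape(*meshShape);

         // Simple proxy shape (box or convex) that is actually simulated.
         PxShape* proxyShape = physics->createShape(
             PxBoxGeometry(halfSize.x, halfSize.y, halfSize.z), *material);
         body->attachShape(*proxyShape);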
  8. The problem is this: I have a set of render vertices that produces the correct result when rendered, but when I try to use PhysX to cook those vertices into a PxTriangleMesh, I always get an unexpected result. I wonder whether those vertices are simply too messy for PhysX to cook correctly. I have tried many ways to fix it, including merging duplicated vertices, but got little improvement. These are the cooking parameters I set:

         PxCookingParams params(scale);
         params.meshWeldTolerance = 0.00001f;
         params.meshPreprocessParams = PxMeshPreprocessingFlags(
             PxMeshPreprocessingFlag::eWELD_VERTICES |
             PxMeshPreprocessingFlag::eREMOVE_UNREFERENCED_VERTICES |
             PxMeshPreprocessingFlag::eREMOVE_DUPLICATED_TRIANGLES);

     Is this a common problem, or did I just forget to set something? How do I solve it?
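     For reference, this is roughly how I build the descriptor, plus the validation call I believe the cooking library offers (PxCooking::validateTriangleMesh) to check whether the raw input itself is the problem; the vertex/index variables are placeholders:

         PxTriangleMeshDesc meshDesc;
         meshDesc.points.count     = vertexCount;
         meshDesc.points.stride    = sizeof(vec3);
         meshDesc.points.data      = vertices;
         meshDesc.triangles.count  = indexCount / 3;
         meshDesc.triangles.stride = 3 * sizeof(uint32_t);
         meshDesc.triangles.data   = indices;

         // Quick sanity check on the raw input before cooking it.
         bool inputIsValid = cooking->validateTriangleMesh(meshDesc);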
  9. In OpenGL I can explicitly decide which texture unit (GL_TEXTURE0 and so on) a GLSL sampler reads from with glUniform1i(**, 0). But in D3D I can't do that? Or can I do it in the HLSL code? As you can see, OpenGL and D3D do it in opposite ways.
  10. Well, in OpenGL I can use glUniform to bind a sampler to a texture unit, but I can't find an interface in D3D9 that accomplishes the same thing. ID3DXConstantTable doesn't provide such an interface. How do I solve it?
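      For what it's worth, the closest thing I have found is reading the sampler's stage back from the constant table and binding the texture to that stage myself; a minimal sketch, assuming ID3DXConstantTable::GetSamplerIndex and a hypothetical "gDiffuseMap" sampler name:

          D3DXHANDLE handle = constantTable->GetConstantByName(NULL, "gDiffuseMap");
          UINT stage = constantTable->GetSamplerIndex(handle);

          // Bind the texture (and sampler state) to the stage the shader expects.
          device->SetTexture(stage, texture);
          device->SetSamplerState(stage, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);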
  11. I am a beginner in HLSL, and I am writing a very simple program, but an error occurs.

      CODE:

          struct VS_INPUT
          {
              float4 position : POSITION;
              float2 uv0 : TEXC00RD0;
              float2 uv1 : TEXC00RD1;
              float2 uv2 : TEXC00RD2;
          };

          struct PS_INPUT
          {
              float4 position : POSITION;
              float2 uv0 : TEXCOORD0;
              float2 uv1 : TEXCOORD1;
              float2 uv2 : TEXCOORD2;
          };

          PS_INPUT Main(VS_INPUT input)
          {
              PS_INPUT output = (PS_INPUT)0;

              output.position = input.position;
              output.uv0 = input.uv0;
              output.uv1 = input.uv1;
              output.uv2 = input.uv2;
              return output;
          }

      [attachment=27370:??.JPG]

      What is wrong with the code?
  12. Come on. That is not gl_Normal...
  13. I have created a program and attached a vertex shader and a fragment shader. Before linking the program, I use glBindAttribLocationARB to bind the attribute locations, and then I use glLinkProgramARB to link the program. Linking is OK; there is still no error. But when I use glGetAttribLocationARB to get the locations, the returned result is wrong!

      The attribute "vertex" returns 0 but "normal" returns -1, even though I bound them to 0 and 2.

      Here are the shaders, very simple:

      vertex shader:

          #version 330 core
          uniform mat4 MV;
          uniform mat4 Proj;
          in vec3 normal;
          in vec3 vertex;

          void main()
          {
              gl_Position = Proj * MV * vec4(vertex, 1.0);
          }

      fragment shader:

          #version 330 core
          void main()
          {
          }

      If anyone knows the reason, please tell me!
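      For reference, the binding sequence I described, sketched with the core (non-ARB) entry points and a hypothetical program handle "prog":

          // Bind the attribute locations BEFORE linking, then link and query them back.
          glBindAttribLocation(prog, 0, "vertex");
          glBindAttribLocation(prog, 2, "normal");
          glLinkProgram(prog);

          GLint vertexLoc = glGetAttribLocation(prog, "vertex");  // comes back 0, as expected
          GLint normalLoc = glGetAttribLocation(prog, "normal");  // comes back -1 for me

      My own suspicion is that the driver may simply optimise "normal" away because the vertex shader never reads it, in which case glGetAttribLocation returning -1 for an inactive attribute would be expected behaviour.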
  14. Question about calculating Frustum plane

      Thanks. The article here really explains the reason.
  15. I am now reading code in the OGRE engine. In the frustum part there is a function that calculates the frustum planes, and it is very confusing.

          void Frustum::updateFrustumPlanesImpl(void) const
          {
              // -------------------------
              // Update the frustum planes
              // -------------------------
              Matrix4 combo = mProjMatrix * mViewMatrix;

              mFrustumPlanes[FRUSTUM_PLANE_LEFT].normal.x = combo[3][0] + combo[0][0];
              mFrustumPlanes[FRUSTUM_PLANE_LEFT].normal.y = combo[3][1] + combo[0][1];
              mFrustumPlanes[FRUSTUM_PLANE_LEFT].normal.z = combo[3][2] + combo[0][2];
              mFrustumPlanes[FRUSTUM_PLANE_LEFT].d = combo[3][3] + combo[0][3];

              mFrustumPlanes[FRUSTUM_PLANE_RIGHT].normal.x = combo[3][0] - combo[0][0];
              mFrustumPlanes[FRUSTUM_PLANE_RIGHT].normal.y = combo[3][1] - combo[0][1];
              mFrustumPlanes[FRUSTUM_PLANE_RIGHT].normal.z = combo[3][2] - combo[0][2];
              mFrustumPlanes[FRUSTUM_PLANE_RIGHT].d = combo[3][3] - combo[0][3];

              mFrustumPlanes[FRUSTUM_PLANE_TOP].normal.x = combo[3][0] - combo[1][0];
              mFrustumPlanes[FRUSTUM_PLANE_TOP].normal.y = combo[3][1] - combo[1][1];
              mFrustumPlanes[FRUSTUM_PLANE_TOP].normal.z = combo[3][2] - combo[1][2];
              mFrustumPlanes[FRUSTUM_PLANE_TOP].d = combo[3][3] - combo[1][3];

              mFrustumPlanes[FRUSTUM_PLANE_BOTTOM].normal.x = combo[3][0] + combo[1][0];
              mFrustumPlanes[FRUSTUM_PLANE_BOTTOM].normal.y = combo[3][1] + combo[1][1];
              mFrustumPlanes[FRUSTUM_PLANE_BOTTOM].normal.z = combo[3][2] + combo[1][2];
              mFrustumPlanes[FRUSTUM_PLANE_BOTTOM].d = combo[3][3] + combo[1][3];

              mFrustumPlanes[FRUSTUM_PLANE_NEAR].normal.x = combo[3][0] + combo[2][0];
              mFrustumPlanes[FRUSTUM_PLANE_NEAR].normal.y = combo[3][1] + combo[2][1];
              mFrustumPlanes[FRUSTUM_PLANE_NEAR].normal.z = combo[3][2] + combo[2][2];
              mFrustumPlanes[FRUSTUM_PLANE_NEAR].d = combo[3][3] + combo[2][3];

              mFrustumPlanes[FRUSTUM_PLANE_FAR].normal.x = combo[3][0] - combo[2][0];
              mFrustumPlanes[FRUSTUM_PLANE_FAR].normal.y = combo[3][1] - combo[2][1];
              mFrustumPlanes[FRUSTUM_PLANE_FAR].normal.z = combo[3][2] - combo[2][2];
              mFrustumPlanes[FRUSTUM_PLANE_FAR].d = combo[3][3] - combo[2][3];

              // Renormalise any normals which were not unit length
              for (int i = 0; i < 6; i++)
              {
                  Real length = mFrustumPlanes[i].normal.normalise();
                  mFrustumPlanes[i].d /= length;
              }

              mRecalcFrustumPlanes = false;
          }

      The matrix is row major, and vectors are treated as column vectors.

      As you can see, the code calculates the planes without ever computing the inverse of the view-projection matrix. How can it do that?
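      Working it out for myself, no inverse seems to be needed because the planes can be read straight off the combined matrix. With combo = Proj * View acting on column vectors, a world-space point p = (x, y, z, 1) is inside the left clip plane when its clip-space coordinates satisfy x_c >= -w_c, that is

          x_c + w_c = (row_0 + row_3) \cdot p >= 0,    where row_i denotes the i-th row of combo,

      so row_0 + row_3 already holds the left plane's normal and d directly in world space, and the other planes follow from the analogous sums and differences with rows 1 and 2. The loop at the end merely rescales each plane so its normal has unit length.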