
  1. Bojanovski

    Capsule Inertia Tensor

    Although there are robust algorithms that calculate body mass, center of mass and inertia tensor from an input triangle mesh, there are several reasons to prefer an alternative when one is available. The first is probably speed: processing each of the n triangles gives an O(n) algorithm, while a mathematically defined body such as a capsule can be handled in O(1). Secondly, we often want to create a rigid body for our physics simulation but have only one mesh, a very detailed one meant for rendering. In such cases it is inconvenient to construct a special mesh just for physics calculations, and much easier to pass a few arguments to a function for a specific geometric primitive. Last, and not so significant (at least for games), is accuracy. A mesh built from triangles will never truly have round parts on its surface; this goes for spheres, cylinders, capsules and many other primitives, so if we pass our capsule as a mesh we end up with slightly incorrect results. The analytical method presented here provides exact results.

It is assumed that the reader has insight into the theory behind inertia tensors and/or has an existing physics system that can make use of such a matrix. Knowledge of multivariate calculus is also assumed.

Concept

Firstly, what is meant by "capsule"? Simply put, it is a cylinder that has hemispherical ends instead of flat ones. By cylinder I mean a circular one (not elliptical), and both hemispheres have the same radius. It should also be mentioned that the body is solid (not hollow).

Figure 1. Capsule definition.

H is the height of the cylindrical part and R is the radius of the hemisphere caps. The coordinate system shown has its origin at the capsule's center of mass, and its axes are the ones we will calculate moments of inertia for.

Figure 2. Capsule decomposition.
The approach we will use to calculate the inertia tensor can be seen in Figure 2. The body is decomposed into three parts: the upper hemisphere, the lower hemisphere and the cylinder. An inertia tensor is calculated for each of them, and the three are then summed to produce the complete capsule tensor.

Here, in Cartesian coordinates, are the well-known general equations for the moments and products of inertia needed for the inertia tensor.

(eq. 1)

V represents the three-dimensional region of integration and dm is an infinitesimal amount of mass at some point in our capsule. The integrand is simply the squared distance from the axis.

(eq. 2)

Note that in eq. 2 the products of inertia are equal to zero. A theorem states that for every body it is possible to find three mutually perpendicular axes such that the tensor produced from those axes has zero products of inertia; these are called the principal axes. Furthermore, for a body of constant density, any axis of rotational symmetry is a principal axis. Our capsule has three perpendicular axes of rotational symmetry: the y axis of continuous rotational symmetry, and the x and z axes of discrete rotational symmetry. I will not go into detail about axes of rotational symmetry, but I recommend reading up on them.

(eq. 3)

Since we will integrate over space, eq. 3 will be used. If the density is required to be some constant other than 1, the end result can simply be multiplied by that constant.

Cylinder

Before we dive into the integration, here is the Cartesian coordinate system we will be using for the cylinder:

Figure 3. Cylinder and its position relative to the coordinate system.

Let's start with the easiest moment of inertia to compute, the one around the y axis. As previously mentioned, the integrand is just the squared distance from the axis. With that in mind we can leave the Cartesian coordinate system and use the cylindrical one; the y axis remains unchanged.
Also note that solutions will be written out immediately, without showing the intermediate steps.

(eq. 4)

First we add up (integrate) over the 2*pi*r interval to get the moment of inertia of a circular curve around the y axis. We then integrate those curves over the interval [0, R] to get the moment of inertia of a disc perpendicular to the y axis. Lastly, the discs are added up across the interval [-H/2, H/2] to get the moment of inertia of the whole cylinder. The integral for the mass of the cylinder is very similar and follows the same intuition. It is given by:

(eq. 5)

Slightly more complicated are the integrals for the moments of inertia around the x and z axes. They are equal, so only one calculation is needed. Note that we are back in Cartesian coordinates for this one and that the integral is set up as if we were calculating Ixx.

(eq. 6)

That is it for the cylinder; I will not go into detail about the intuition for this integral, since there is more work to be done for the hemisphere caps.

Hemisphere caps

This is how the hemisphere is defined relative to the Cartesian coordinate system for the purpose of integration:

Figure 4. Hemisphere and its position relative to the coordinate system.

The easiest moment of inertia to calculate for the hemispheres is again the one around the y axis.

(eq. 7)

We are in the cylindrical coordinate system and the intuition is similar to eq. 4. The only difference is that the radius of each disc is now a function of the disc's y position (sqrt(R^2 - y^2)). We integrate those discs from 0 to R and end up with the whole hemisphere region. As with the cylinder, the integral for the mass of a hemisphere follows the same intuition and is given by:

(eq. 8)

Now, when we try to calculate the moments of inertia around the x and z axes (which are equal), things get tricky. Note that the integral is set up as if we were calculating Ixx:

(eq. 9)

Figure 5. Center of mass of a hemisphere.
As can be seen in Figure 5, the center of mass is not at the origin of the coordinate system, but the axis for which we calculated the moment of inertia passes through the origin. In order to work with this moment of inertia (to apply Steiner's parallel axis rule) we need it expressed about the center of mass. So we calculate the distance b and use it in Steiner's rule in reverse:

(eq. 10)

(eq. 11)

Here Icm is the moment of inertia about an axis through the center of mass, and m is the hemisphere mass from eq. 8. It is now finally possible to calculate Ixx and Izz for our hemisphere caps: we apply Steiner's rule again to translate the axes from the center of mass of a hemisphere to the center of mass of the whole body (the capsule):

(eq. 12)

The moments of inertia for the hemisphere cap on the bottom of the capsule do not differ from these.

The tensor

(eq. 13)

The equations in eq. 13 give us the separate masses of the cylinder and the hemispheres, as well as the mass of the whole body. Notice that the mass of a hemisphere (mhs) is multiplied by two. This is, of course, because there is a hemisphere on each end of the capsule. Also recall our earlier assumption that density equals one unit of measurement, which is why mass equals volume. If needed, mcy and mhs can simply be multiplied by some constant density and the results remain accurate; the code below is written that way. The full tensor uses these values and is given by:

(eq. 14)

Although the formulas for the moments of inertia shown earlier are not expressed in terms of mcy and mhs, the equality is easily shown by substituting those values. The parts that represent moments of inertia of a hemisphere are multiplied by two, because there are two of them.

Code

The following implementation is written in a C-like language and is fairly optimized.
#define PI 3.141592654f
#define PI_TIMES2 6.283185307f

const float oneDiv3 = (float)(1.0 / 3.0);
const float oneDiv8 = (float)(1.0 / 8.0);
const float oneDiv12 = (float)(1.0 / 12.0);

void ComputeRigidBodyProperties_Capsule(float capsuleHeight, float capsuleRadius, float density,
    float &mass, float3 &centerOfMass, float3x3 &inertia)
{
    float cM; // cylinder mass
    float hsM; // mass of hemispheres
    float rSq = capsuleRadius*capsuleRadius;
    cM = PI*capsuleHeight*rSq*density;
    hsM = PI_TIMES2*oneDiv3*rSq*capsuleRadius*density;

    // from cylinder
    inertia._22 = rSq*cM*0.5f;
    inertia._11 = inertia._33 = inertia._22*0.5f + cM*capsuleHeight*capsuleHeight*oneDiv12;

    // from hemispheres
    float temp0 = hsM*2.0f*rSq / 5.0f;
    inertia._22 += temp0 * 2.0f;
    float temp1 = capsuleHeight*0.5f;
    float temp2 = temp0 + hsM*(temp1*temp1 + 3.0f*oneDiv8*capsuleHeight*capsuleRadius);
    inertia._11 += temp2 * 2.0f;
    inertia._33 += temp2 * 2.0f;
    inertia._12 = inertia._13 = inertia._21 = inertia._23 = inertia._31 = inertia._32 = 0.0f;

    mass = cM + hsM * 2.0f;
    centerOfMass = {0.0f, 0.0f, 0.0f};
}

It might seem that centerOfMass is useless here, but I have left it in because it is an important part of rigid body properties and often a requirement for physics engines.

Conclusion

Capsules defined in the way presented here are very commonly used in games and physical simulations, so this tensor should be of use to anyone building their own 3D game engine or similar software. Although a right-handed Cartesian coordinate system was used in this article, the code also works in a left-handed system, as long as the y axis is the longitudinal axis (the one that goes through the caps). The somewhat tedious integration steps were skipped and only the end results were shown, mostly because they would take too much space. If you are curious whether the expressions are correct, you can try them on Wolfram or integrate them yourself. Also, you can follow me on Twitter.
Article Update Log

December 1, 2014: Initial release
December 3, 2014: Updated figures, equations and text
  2. Bojanovski

    Capsule Inertia Tensor

    Whoa, that's a lot of work you gave me, but your notes are sound and I did my best to update the article. If there is anything else, let me know.   Thanks, Bojan
  3. Bojanovski

    Capsule Inertia Tensor

    Thanks for pointing cm (center of mass) out. The new version of the article has that part a lot clearer now. As for your question about density, the answer is yes, it is as simple as multiplying the masses. There is nothing more to it as long as it is constant. The new version of the article has that part also explained better.   Thank you, Bojan
  4. To achieve high visual fidelity in cloth simulation often requires large numbers of springs and particles, which can have devastating effects on real-time performance. Luckily there exists a workaround using B-splines and hardware tessellation. In my approach I used a grid of only 10x10 control points to produce believable and fast results. The CPU is tasked only with collision detection and integration of the control points, which is computed quickly considering that there are only 100 of them, and it can easily run on a separate thread. All of the B-spline computation is done on the GPU or, to be more specific, in the domain shader of the tessellation pipeline. Since GPU code is the only part of this implementation involving B-spline computation, all code that follows is written in HLSL.

B-Splines

I will not go into detail about B-splines, but some basics should be mentioned. Here is the spline definition (note that there are actually n+1 control points, not n):

(eq. 1)

N - basis functions
B - control points
d - degree
u - input parameter

The degree of the curve d is a natural number such that 1 <= d <= n. The number of control points influencing the shape of a curve segment equals d + 1, meaning that if we chose d = 1, that number would be 2 and we would end up with linear interpolation between pairs of control points. Experimenting with different values of d, I found that the best balance between performance and visual quality is d = 2.

Figure 1. B-spline curve

The extension from curves to surfaces is simple and is given by:

(eq. 2)

Basis function values for each parameter, u and v, are computed separately and usually stored in an array before they are used in this formula. These function values determine how much each control point influences the end result for a certain input parameter u (or u and v for surfaces). The definition of the basis functions is where things get tricky:

(eq. 3)

The recursion ends when d reaches zero. Each knot belongs to a knot vector, a nondecreasing sequence of scalars; an open (nonperiodic), uniformly spaced knot vector is used here. As stated before, n + 1 is the actual number of control points (according to the definition), but this is inconvenient for our code, which is why, in code, the denominator is reduced by 1. Here is the implementation:

float GetKnot(int i, int n)
{
    // Calculate a knot from an open uniform knot vector
    return saturate((float)(i - D) / (float)(n - D));
}

These recursive dependencies can be put into a table, and it is easy to see that for a given u only one basis function N in the bottom row is nonzero.

Figure 2. Recursive dependencies of the basis functions (n=4, d=2). For a certain u the only nonzero values are in the rectangles.

This is the motivation behind De Boor's algorithm, which optimizes the one based on the mathematical definition. Further optimization is also possible, like the one from the book by David H. Eberly. I have modified David's algorithm slightly so that it can run on the GPU, and added some code for normal and tangent computation.

So how do we get the right i, that is, the knot span containing a given u? Considering that the knot vector is uniformly spaced, it can be calculated easily. One thing to note, though: at u = 1 we would end up with an out-of-range index and incorrect values. This could be fixed by making sure a vertex with a texture coordinate (either u or v) equal to 1 is never processed, which is inconvenient. The simpler way is used here: we just multiply the u parameter by a number that is "almost" 1.

int GetKey(float u, int n)
{
    return D + (int)floor((n - D) * u*0.9999f);
}

The last thing we need before any of this code will work is our constant buffer and preprocessor definitions. Although the control points array is allocated at its maximum size, smaller sizes are possible by passing values to gNU and gNV.
Variable gCenter will be discussed later, but apart from that all variables should be familiar by now.

#define MAX_N 10 // maximum number of control points in either direction (U or V)
#define D 2 // degree of the curve
#define EPSILON 0.00002f // used for normal and tangent calculation

cbuffer cbPerObject
{
    // B-Spline
    int gNU; // actual number of control points in U direction
    int gNV; // actual number of control points in V direction
    float4 gCP[MAX_N * MAX_N]; // control points
    float3 gCenter; // arithmetic mean of control points

    // ... other variables
};

The function tasked with computing the B-spline takes only the texture coordinates (u and v) and uses them to compute position, normal and tangent. We add a small epsilon to the coordinates u and v to produce an offset in coordinate space. These new values are named u_pdu and v_pdv in code, and here is how they are used to produce the tangent and normal:

(eq. 4)

Now, as mentioned earlier, basis function values are computed and stored in separate arrays for the u and v parameters, but since we have the two additional parameters u_pdu and v_pdv, a total of four basis function arrays will be needed. These are named basisU, basisV, basisU_pdu and basisV_pdv in code. The GetKey() function is used here to calculate the knot span index i for a given u, as stated before, and separately one i for a given v. One might think that we also need separate indices for u_pdu and v_pdv. That would be correct according to the definition, but the inaccuracy we get from u_pdu and v_pdv potentially not having the correct i, and thus slightly inaccurate basis function values, is too small to take into account.
void ComputePosNormalTangent(in float2 texCoord, out float3 pos, out float3 normal, out float3 tan)
{
    float u = texCoord.x;
    float v = texCoord.y;
    float u_pdu = texCoord.x + EPSILON;
    float v_pdv = texCoord.y + EPSILON;
    int iU = GetKey(u, gNU);
    int iV = GetKey(v, gNV);

    // create and set basis
    float basisU[D + 1][MAX_N + D];
    float basisV[D + 1][MAX_N + D];
    float basisU_pdu[D + 1][MAX_N + D];
    float basisV_pdv[D + 1][MAX_N + D];
    basisU[0][iU] = basisV[0][iV] = basisU_pdu[0][iU] = basisV_pdv[0][iV] = 1.0f;

    // ... the rest of the function code

Now for the actual basis function computation. If you look at Figure 2 you can see that the nonzero values form a triangle. The values on the left diagonal and the right vertical edge are computed first, since each of them depends on only one previous value. The interior values are then computed using eq. 3. Every remaining value in the basis function arrays is simply left untouched. Their values are zero, but even if they held some unwanted value it would not matter, as will be seen later.

    // ... the rest of the function code

    // evaluate triangle edges
    [unroll]
    for (int j = 1; j <= D; ++j)
    {
        float gKI;
        float gKI1;
        float gKIJ;
        float gKIJ1;

        // U
        gKI = GetKnot(iU, gNU);
        gKI1 = GetKnot(iU + 1, gNU);
        gKIJ = GetKnot(iU + j, gNU);
        gKIJ1 = GetKnot(iU - j + 1, gNU);
        float c0U = (u - gKI) / (gKIJ - gKI);
        float c1U = (gKI1 - u) / (gKI1 - gKIJ1);
        basisU[j][iU] = c0U * basisU[j - 1][iU];
        basisU[j][iU - j] = c1U * basisU[j - 1][iU - j + 1];
        float c0U_pdu = (u_pdu - gKI) / (gKIJ - gKI);
        float c1U_pdu = (gKI1 - u_pdu) / (gKI1 - gKIJ1);
        basisU_pdu[j][iU] = c0U_pdu * basisU_pdu[j - 1][iU];
        basisU_pdu[j][iU - j] = c1U_pdu * basisU_pdu[j - 1][iU - j + 1];

        // V
        gKI = GetKnot(iV, gNV);
        gKI1 = GetKnot(iV + 1, gNV);
        gKIJ = GetKnot(iV + j, gNV);
        gKIJ1 = GetKnot(iV - j + 1, gNV);
        float c0V = (v - gKI) / (gKIJ - gKI);
        float c1V = (gKI1 - v) / (gKI1 - gKIJ1);
        basisV[j][iV] = c0V * basisV[j - 1][iV];
        basisV[j][iV - j] = c1V * basisV[j - 1][iV - j + 1];
        float c0V_pdv = (v_pdv - gKI) / (gKIJ - gKI);
        float c1V_pdv = (gKI1 - v_pdv) / (gKI1 - gKIJ1);
        basisV_pdv[j][iV] = c0V_pdv * basisV_pdv[j - 1][iV];
        basisV_pdv[j][iV - j] = c1V_pdv * basisV_pdv[j - 1][iV - j + 1];
    }

    // evaluate triangle interior
    [unroll]
    for (j = 2; j <= D; ++j)
    {
        // U
        [unroll(j - 1)]
        for (int k = iU - j + 1; k < iU; ++k)
        {
            float gKK = GetKnot(k, gNU);
            float gKK1 = GetKnot(k + 1, gNU);
            float gKKJ = GetKnot(k + j, gNU);
            float gKKJ1 = GetKnot(k + j + 1, gNU);
            float c0U = (u - gKK) / (gKKJ - gKK);
            float c1U = (gKKJ1 - u) / (gKKJ1 - gKK1);
            basisU[j][k] = c0U * basisU[j - 1][k] + c1U * basisU[j - 1][k + 1];
            float c0U_pdu = (u_pdu - gKK) / (gKKJ - gKK);
            float c1U_pdu = (gKKJ1 - u_pdu) / (gKKJ1 - gKK1);
            basisU_pdu[j][k] = c0U_pdu * basisU_pdu[j - 1][k] + c1U_pdu * basisU_pdu[j - 1][k + 1];
        }

        // V
        [unroll(j - 1)]
        for (k = iV - j + 1; k < iV; ++k)
        {
            float gKK = GetKnot(k, gNV);
            float gKK1 = GetKnot(k + 1, gNV);
            float gKKJ = GetKnot(k + j, gNV);
            float gKKJ1 = GetKnot(k + j + 1, gNV);
            float c0V = (v - gKK) / (gKKJ - gKK);
            float c1V = (gKKJ1 - v) / (gKKJ1 - gKK1);
            basisV[j][k] = c0V * basisV[j - 1][k] + c1V * basisV[j - 1][k + 1];
            float c0V_pdv = (v_pdv - gKK) / (gKKJ - gKK);
            float c1V_pdv = (gKKJ1 - v_pdv) / (gKKJ1 - gKK1);
            basisV_pdv[j][k] = c0V_pdv * basisV_pdv[j - 1][k] + c1V_pdv * basisV_pdv[j - 1][k + 1];
        }
    }

    // ... the rest of the function code

And finally, with the basis function values computed and saved in arrays, we are ready to use eq. 1. But there is one particular thing that should be discussed first. If you know how floating-point numbers work (IEEE 754), then you know that adding a very small number (like our EPSILON) to a very big one can lose data. This is exactly what happens if the control points are relatively far from the world's coordinate origin: vectors like pos_pdu and pos, which should differ by a small amount, end up being equal. To prevent this, all control points are translated towards the center by the gCenter variable, a simple arithmetic mean of all the control points.

    // ... the rest of the function code

    float3 pos_pdu, pos_pdv;
    pos.x = pos_pdu.x = pos_pdv.x = 0.0f;
    pos.y = pos_pdu.y = pos_pdv.y = 0.0f;
    pos.z = pos_pdu.z = pos_pdv.z = 0.0f;
    // [unroll(D + 1)]
    for (int jU = iU - D; jU <= iU; ++jU)
    {
        // [unroll(D + 1)]
        for (int jV = iV - D; jV <= iV; ++jV)
        {
            pos += basisU[D][jU] * basisV[D][jV] * (gCP[jU + jV * gNU].xyz - gCenter);
            pos_pdu += basisU_pdu[D][jU] * basisV[D][jV] * (gCP[jU + jV * gNU].xyz - gCenter);
            pos_pdv += basisU[D][jU] * basisV_pdv[D][jV] * (gCP[jU + jV * gNU].xyz - gCenter);
        }
    }
    tan = normalize(pos_pdu - pos);
    float3 bTan = normalize(pos_pdv - pos);
    normal = normalize(cross(tan, bTan));
    pos += gCenter;
}

Hardware tessellation and geometry shader

Still with me? Awesome, since it is mostly downhill from this point. It was probably easy to guess that a mesh in the form of a grid will be needed here, made so that its texture coordinates stretch from 0 to 1.
An algorithm for this should be easy to implement, so I will leave it out. It is useless to store position values in the vertex structure, since that data is passed through the control points (gCP). This is what the vertex input structure and vertex shader need to look like:

struct V_TexCoord
{
    float2 TexCoord : TEXCOORD;
};

V_TexCoord VS(V_TexCoord vin)
{
    // Just a pass through shader
    V_TexCoord vout;
    vout.TexCoord = vin.TexCoord;
    return vout;
}

The tessellation stages start with a hull shader. Tessellation factors are calculated in the constant hull shader ConstantHS(), while the control point hull shader HS() is, like VS(), a pass-through shader. Although I first experimented with per-triangle tessellation, per-object tessellation turned out cleaner, faster and easier to implement, so that approach is presented here.

struct PatchTess
{
    float EdgeTess[3] : SV_TessFactor;
    float InsideTess : SV_InsideTessFactor;
};

PatchTess ConstantHS(InputPatch<V_TexCoord, 3> patch, uint patchID : SV_PrimitiveID)
{
    PatchTess pt;

    // Uniformly tessellate the patch.
    float tess = CalcTessFactor(gCenter);
    pt.EdgeTess[0] = tess;
    pt.EdgeTess[1] = tess;
    pt.EdgeTess[2] = tess;
    pt.InsideTess = tess;

    return pt;
}

[domain("tri")]
[partitioning("fractional_odd")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(3)]
[patchconstantfunc("ConstantHS")]
[maxtessfactor(64.0f)]
V_TexCoord HS(InputPatch<V_TexCoord, 3> p, uint i : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
{
    // Just a pass through shader
    V_TexCoord hout;
    hout.TexCoord = p[i].TexCoord;
    return hout;
}

We also need a method for calculating the tessellation factor, and a supplement to our constant buffer cbPerObject. Since the control points are already in world space, only the view and projection matrices are needed, and they are supplied here already multiplied together as gViewProj. Variable gEyePosW is simply the camera position, and all variables under the "Tessellation" comment should be self-explanatory.
CalcTessFactor() gives us the tessellation factor used in HS() via a distance-based function. You can alter the way this factor changes with distance by choosing a different exponent for the base s.

cbuffer cbPerObject
{
    // ... other variables

    // Camera
    float4x4 gViewProj;
    float3 gEyePosW;

    // Tessellation
    float gMaxTessDistance;
    float gMinTessDistance;
    float gMinTessFactor;
    float gMaxTessFactor;
};

float CalcTessFactor(float3 p)
{
    float d = distance(p, gEyePosW);
    float s = saturate((d - gMinTessDistance) / (gMaxTessDistance - gMinTessDistance));
    return lerp(gMinTessFactor, gMaxTessFactor, pow(s, 1.5f));
}

Now, in the domain shader, we get to use all that B-spline goodness. A new structure is also given, as here we introduce positions, normals and tangents. Barycentric interpolation is used to acquire the texture coordinates of a generated vertex, which are then used as the u and v parameters of our ComputePosNormalTangent() function.

struct V_PosW_NormalW_TanW_TexCoord
{
    float3 PosW : POSITION;
    float3 NormalW : NORMAL;
    float3 TanW : TANGENT;
    float2 TexCoord : TEXCOORD;
};

[domain("tri")]
V_PosW_NormalW_TanW_TexCoord DS(PatchTess patchTess, float3 bary : SV_DomainLocation, const OutputPatch<V_TexCoord, 3> tri)
{
    float2 texCoord = bary.x*tri[0].TexCoord + bary.y*tri[1].TexCoord + bary.z*tri[2].TexCoord;

    V_PosW_NormalW_TanW_TexCoord dout;
    ComputePosNormalTangent(texCoord, dout.PosW, dout.NormalW, dout.TanW);
    dout.TexCoord = texCoord;
    return dout;
}

And now the final part before passing vertex data to the pixel shader: the geometry shader. Why? Well, because cloth is visible from both sides, DUH! Triangles in the DirectX graphics pipeline are not, however, and even if we disable backface culling, the normals would still have opposite values on the back face of a triangle. This is where GS() comes in. We take three vertices at once from DS() (one triangle) and copy them to the output stream. Additionally, three more vertices are added, differing only in the normal.
Another thing worth mentioning is that PosW is transformed to homogeneous clip space (projection space) and saved in PosH, which is the reason for this new structure:

struct V_PosH_NormalW_TanW_TexCoord
{
    float4 PosH : SV_POSITION;
    float3 NormalW : NORMAL;
    float3 TanW : TANGENT;
    float2 TexCoord : TEXCOORD;
};

[maxvertexcount(6)]
void GS(triangle V_PosW_NormalW_TanW_TexCoord gin[3], inout TriangleStream<V_PosH_NormalW_TanW_TexCoord> triStream)
{
    V_PosH_NormalW_TanW_TexCoord gout[6];

    [unroll] // just copy pasti'n
    for (int i = 0; i < 3; ++i)
    {
        float3 posW = gin[i].PosW;
        gout[i].PosH = mul(float4(posW, 1.0f), gViewProj);
        gout[i].NormalW = gin[i].NormalW;
        gout[i].TanW = gin[i].TanW;
        gout[i].TexCoord = gin[i].TexCoord;
    }

    [unroll] // create the other side
    for (i = 3; i < 6; ++i)
    {
        float3 posW = gin[i-3].PosW;
        gout[i].PosH = mul(float4(posW, 1.0f), gViewProj);
        gout[i].NormalW = -gin[i-3].NormalW;
        gout[i].TanW = gin[i-3].TanW;
        gout[i].TexCoord = gin[i-3].TexCoord;
    }

    triStream.Append(gout[0]);
    triStream.Append(gout[1]);
    triStream.Append(gout[2]);
    triStream.RestartStrip();
    triStream.Append(gout[3]);
    triStream.Append(gout[5]);
    triStream.Append(gout[4]);
}

I will leave it to the reader to decide what to do in the pixel shader. With normals, tangents and texture coordinates, there is everything needed to create all kinds of visual magic. Good luck!

float4 PS(V_PosH_NormalW_TanW_TexCoord pin) : SV_Target
{
    // ... now what?! XD
}

technique11 BSplineDraw
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_5_0, VS()));
        SetHullShader(CompileShader(hs_5_0, HS()));
        SetDomainShader(CompileShader(ds_5_0, DS()));
        SetGeometryShader(CompileShader(gs_5_0, GS()));
        SetPixelShader(CompileShader(ps_5_0, PS()));
    }
}

Conclusion

Although this is not a complete shader, and the work done on the CPU is not covered at all, I think this article gives a good start to anybody wanting fast and pleasant-looking cloth simulation in their engine. Here is a file that contains all of the written code in an orderly fashion, which should be free of bugs and errors.
Also, you can follow me on Twitter and check out this, which demonstrates the explained methods in action. Hope you like it!
  5. Bojanovski

    DX11 warning C4005

    I skimmed through directxtk a little, but I could not find functions for loading an .fxo file. It only says it offers built-in shaders to create functionality equivalent to BasicEffect from XNA. Also, I am hearing that DirectX 11.1 handles effects differently. What are Microsoft's future plans? Will DirectX 11.2 in Win 8.1 have these things fixed?

EDIT:

This tutorial from Frank D. Luna shows how to use HLSL (compile, load, send parameters such as constant buffers, SRVs, etc.) without using the deprecated Effects11 library.

He also said: "In Metro applications, you cannot link with D3DCompiler.lib. Metro only allows certain APIs to be used for security (this is also why D3DX library and Effects cannot be used).", which answers one of my previous questions.

So I decided to keep using effects11.lib. As for the warning, I created this header, which I include when I need the DirectX headers:

#ifndef DIRECTX11HEADERS_H
#define DIRECTX11HEADERS_H

#pragma warning( disable : 4005 )
#include <d3d11.h>
#include <d3dx11.h>
#include <d3dx11effect.h>
#pragma warning( default : 4005 )

#endif
  6. Bojanovski

    DX11 warning C4005

    Hi!

So I've been trying to do this for far too long, but every option leads me to a dead end. I want to use DirectX 11 in Visual Studio 2012 without getting that damn warning C4005. My project is dependent on the d3dx11 library, so it has to be DX11, not DX11.1, unless I find some way to manage my texture and effect loading without d3dx11!

I know what happened to the Windows 8 SDK and how it now includes DirectX 11.1 (except the D3DX library), and when I include my June 2010 version it all gets mixed up, so a bunch of redefinitions pops up.

So what is the easiest way to prevent Visual Studio from looking for DirectX headers in the Windows 8 SDK?

Also, can somebody with a better understanding of Microsoft's way of doing things explain to me why they created a DirectX SDK without built-in libraries and functions for an important task such as texture and effect loading?? :S :'(
  7. So, if anyone has some recommendations, that would be great.   This is the best I found so far: 'Game Physics' by David H. Eberly.