

AzraelilMeraz

Member Since 26 Sep 2008
Offline Last Active Aug 09 2012 12:05 AM

Topics I've Started

[SOLVED] Hair simulation. Verlet/Constraints

01 August 2012 - 01:51 PM

EDIT2: SOLVED. There is nothing wrong with the algorithm - I accidentally ran the constraint generation kernel every frame, so the rest lengths were rebuilt from the already-deformed positions.
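
For anyone who finds this later, a minimal host-side sketch of the fix (queue, kernel and size names here are made up for illustration, not taken from my actual code):

[source lang="cpp"]
// setup: enqueue buildConstraints exactly once, on the original positions
clEnqueueNDRangeKernel(queue, buildConstraintsKernel, 1, NULL,
                       &totalVertices, NULL, 0, NULL, NULL);

// per frame: integration + constraint solving only - if buildConstraints
// runs here too, the rest lengths are recomputed from already-deformed
// positions and the constraints stop pulling the strand back
clEnqueueNDRangeKernel(queue, simulateKernel, 1, NULL,
                       &totalVertices, &verticesPerWorkgroup, 0, NULL, NULL);
[/source]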

Hello!
I've been toying with OpenGL and OpenCL, trying to simulate hair on the GPU.
However, I've run into issues that are not OpenGL/OpenCL specific - I can't seem to write a correct solving algorithm for distance constraints.
I'm not sure where the issue is - maybe I'm generating the constraints incorrectly, maybe it's the solving.
This is how I'm generating the constraints:
[source lang="cpp"]// create constraints from original positions__kernel void buildConstraints(__global const float4* in, __global DistanceConstraint* out, const int verticesPerHair){ int id = get_global_id(0); if((id % verticesPerHair) != 0) { out[id].id1 = id-1; out[id].id2 = id; out[id].d = length(in[id-1]-in[id]); } else { out[id].id1 = id; out[id].id2 = id; out[id].d = 0; }}[/source]

(id % verticesPerHair) == 0 means that the vertex is the root of its hair. Every vertex except the root is constrained to the preceding vertex in the hair; the root is constrained to itself so it won't move.
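
For example, with verticesPerHair = 4, ids 0-3 form one strand: vertex 0 gets the self-constraint (id1 = id2 = 0, d = 0), and vertices 1-3 get constraints to vertices 0-2 with the initial segment lengths as rest lengths.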

Prior to constraining the vertices, the Verlet integration is computed for every vertex except the root:

[source lang="cpp"]// verletif(idl != 0){ float4 difference = localpos[idl] - oldpos[idx]; oldpos[idx] = localpos[idl]; localpos[idl] += (1-friction)*difference + force*pdt*pdt;}[/source]

Then, to satisfy the constraints, I run a fixed number of iterations of this code:

[source lang="cpp"]barrier(CLK_LOCAL_MEM_FENCE);DistanceConstraint constraint = constraints[idx];for(int i = 0; i < dcIter; i++){ if((idl%2) == 1) { distanceConstraint(localpos, constraint.id1 % verticesPerHair, constraint.id2 % verticesPerHair, constraint.d); } barrier(CLK_LOCAL_MEM_FENCE); if((idl%2) == 0) { distanceConstraint(localpos, constraint.id1 % verticesPerHair, constraint.id2 % verticesPerHair, constraint.d); } barrier(CLK_LOCAL_MEM_FENCE);}[/source]

constraint is what was generated for this vertex in buildConstraints(). The barrier() calls are there because the algorithm runs in parallel: the constraints for odd vertices are solved first, then all threads wait, then the constraints for even vertices are solved. Without this odd/even split, two neighbouring threads could move the vertex they share at the same time.

The function distanceConstraint() looks like this:
[source lang="cpp"]// satisfy distance constraintsvoid distanceConstraint(__local HairVertex* pos, int idx1, int idx2, const float conDist){ // vector from vertex 1 to vertex 2 float4 difference = pos[idx2].position-pos[idx1].position; // By what amount move which vertex float2 cFactors = constraintFactors(idx1, idx2); // length of the vector between the vertices float len = length(difference); // distance the vertices must move towards each other float distance = len-conDist; // no corruption through div by zero len = max(len,1e-7f); // move pos[idx1].position += cFactors.x*distance/len*difference; pos[idx2].position += cFactors.y*distance/len*difference;}[/source]


constraintFactors() returns factors that determine how to handle vertices depending on whether or not they are roots:
[source lang="cpp"]// for given indices id1 and id2 return constraint factorsfloat2 constraintFactors(int id1, int id2){ if(id1 != 0) { // first vertex is not a root if(id2 != 0) { // both vertices not a root return (float2)(0.5f, -0.5f); } else { // first vertex is not root, second is root return (float2)(0.0f, 0.0f); } } else { // first vertex is root if(id2 != 0) { // first vertex is root, second is not return (float2)(0.0f, -1.0f); } else { // both vertices are roots (shouldn't happen) return (float2)(0.0f, 0.0f); } }}[/source]

I have looked through NVIDIA's DirectX implementation of the simulation (NVIDIA SDK 11), and the only deviation from my constraint code was that when the first vertex is not a root and the second one is, the factors were (1.0f, 0.0f). I am not certain how their constraints were generated, however, so I suspect my mistake is somewhere around there.

The actual problem is that after a while the simulation goes full spaghetti mode... To be a bit more precise: the control vertices seem to move from the root towards the tip, and the tip seems to move away in the direction the force is applied. Here's what it looks like in point mode:

http://sw-ores.de/pr...s/spaghetti.mkv

I have no clue what I'm doing wrong.


Flickering triangle

10 October 2008 - 04:36 AM

Hello GameDev! What is this? Thank you in advance.

[Texture Mapping] Relation between float and short Texture coordinate formats?

09 October 2008 - 12:27 AM

Hi GameDev! How do float and short texture coordinate formats relate to each other in OpenGL? It appears to me as if OpenGL tries to map the texture between 0 and 1 even with short. How do I change the range?

[VBO] One big buffer for everything

06 October 2008 - 06:17 AM

Hello GameDev! I've got a simple but working 3D renderer based on VBOs. Up to now I've had a single buffer for each vertex attribute. Here's the code - rendering:
void ALSAGE_OGL::drawMesh(int Id,AL_Vector pos,AL_Vector scale, int TexId)
{
    glScalef(scale.x,scale.y,scale.z);
    glDisable(GL_LIGHTING);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_VERTEX_ARRAY);
    if(TexId != -1) glBindTexture(GL_TEXTURE_2D, textures[TexId].Name);

    if(lastMesh != Id)
    {

        glBindBuffer(GL_ARRAY_BUFFER,meshes[Id].VBO[2]);
        glColorPointer(4, GL_UNSIGNED_BYTE, 0,0);

        glBindBuffer(GL_ARRAY_BUFFER,meshes[Id].VBO[3]);
        glTexCoordPointer(2, GL_FLOAT, 0,0);

        glBindBuffer(GL_ARRAY_BUFFER,meshes[Id].VBO[4]);
        glNormalPointer(GL_FLOAT, 0,0);


        glBindBuffer(GL_ARRAY_BUFFER,meshes[Id].VBO[0]);
        glVertexPointer(3, GL_FLOAT, 0,0);

        lastMesh = Id;
    }
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,meshes[Id].VBO[1]);
    glDrawRangeElements(GL_TRIANGLES, 0,meshes[Id].numTris*3-1,meshes[Id].numTris*3,
                        GL_UNSIGNED_INT,NULL);
    glBindTexture(GL_TEXTURE_2D, 0);
    glBindBuffer(GL_ARRAY_BUFFER,0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
loading:
int ALSAGE_OGL::loadMesh(AL_Char* filename)
{
    AL_Mesh* mesh = new AL_Mesh;
    AL_byte nBufs = 5;
    mesh->loadFromFile(filename);
    meshes[numMeshes].numVert = mesh->vertCount();
    meshes[numMeshes].numTris = mesh->TrisCount();


    glGenBuffers(nBufs, meshes[numMeshes].VBO);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, meshes[numMeshes].VBO[1]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, meshes[numMeshes].numTris*sizeof(AL_Triangle),
                 mesh->getTrianglePointer(), GL_STREAM_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, meshes[numMeshes].VBO[2]);
    glBufferData(GL_ARRAY_BUFFER, meshes[numMeshes].numVert*sizeof(AL_VertCol),
                 mesh->getColorPointer(), GL_STREAM_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, meshes[numMeshes].VBO[3]);
    glBufferData(GL_ARRAY_BUFFER, meshes[numMeshes].numVert*sizeof(AL_VertTex),
                 mesh->getUVCoordPointer(), GL_STREAM_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, meshes[numMeshes].VBO[4]);
    glBufferData(GL_ARRAY_BUFFER, meshes[numMeshes].numVert*sizeof(AL_VertNor),
                 mesh->getNormalPointer(), GL_STREAM_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, meshes[numMeshes].VBO[0]);
    glBufferData(GL_ARRAY_BUFFER, meshes[numMeshes].numVert*sizeof(AL_VertCrd),
                 mesh->getCoordPointer(), GL_STREAM_DRAW);
    numMeshes++;
    delete mesh;

    return numMeshes-1;
}
Now I want to pack those buffers into one single buffer for better performance. I don't want to use interleaved arrays, but packed arrays (all values of one attribute first, then all of the next). First I did this:
glBufferData(GL_ARRAY_BUFFER, 
                 meshes[numMeshes].numVert*sizeof(AL_VertTex),
                 0, 
                 GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER,
                    0,
                    meshes[numMeshes].numVert*sizeof(AL_VertTex),
                    mesh->getUVCoordPointer());
Everything is OK, it renders fine. Then I tried this (replacing the color and texture coordinate uploads in loading):
glBufferData(GL_ARRAY_BUFFER, 
                 meshes[numMeshes].numVert*(sizeof(AL_VertCol)+sizeof(AL_VertTex)),
                 0, 
                 GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER,
                    0,
                    meshes[numMeshes].numVert*sizeof(AL_VertCol),
                    mesh->getColorPointer());

    /*glBindBuffer(GL_ARRAY_BUFFER, meshes[numMeshes].VBO[3]);
    glBufferData(GL_ARRAY_BUFFER, 
                 meshes[numMeshes].numVert*sizeof(AL_VertTex),
                 0, 
                 GL_STREAM_DRAW);*/
    glBufferSubData(GL_ARRAY_BUFFER,
                    meshes[numMeshes].numVert*sizeof(AL_VertCol),
                    meshes[numMeshes].numVert*sizeof(AL_VertTex),
                    mesh->getUVCoordPointer());
and changed the corresponding color/texcoord pointer setup in rendering to this:
glBindBuffer(GL_ARRAY_BUFFER,meshes[Id].VBO[2]);
        glColorPointer(4, GL_UNSIGNED_BYTE, 0,0);

        //glBindBuffer(GL_ARRAY_BUFFER,meshes[Id].VBO[3]);
        glTexCoordPointer(2, GL_FLOAT, 0,
            BUFFER_OFFSET(meshes[numMeshes].numVert*sizeof(AL_VertCol)));
The result: http://b.imagehost.org/view/0899/bug.png - the texture coordinates are completely off. So, what did I do wrong?
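
For reference, a minimal sketch of the packed layout I'm after (BUFFER_OFFSET is the usual byte-offset macro; vbo, numVert, colors and texcoords are placeholder names, not my actual variables):

[source lang="cpp"]
#define BUFFER_OFFSET(i) ((char*)NULL + (i))

// upload: one VBO holding all colors first, then all texture coordinates
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER,
             numVert*(sizeof(AL_VertCol)+sizeof(AL_VertTex)),
             0, GL_STREAM_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0,
                numVert*sizeof(AL_VertCol), colors);
glBufferSubData(GL_ARRAY_BUFFER, numVert*sizeof(AL_VertCol),
                numVert*sizeof(AL_VertTex), texcoords);

// draw: both pointers read from the same bound buffer; the texcoord
// offset must be computed from the vertex count of the mesh that is
// actually being drawn
glColorPointer(4, GL_UNSIGNED_BYTE, 0, BUFFER_OFFSET(0));
glTexCoordPointer(2, GL_FLOAT, 0,
                  BUFFER_OFFSET(numVert*sizeof(AL_VertCol)));
[/source]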

[Vista] SIGSEGV on glDrawRangeElements

27 September 2008 - 10:40 AM

Hello GameDev! I have some code which runs perfectly on my computer with Windows XP or Linux, but crashes on a notebook with Windows Vista. The crash occurs during the second call to glDrawRangeElements with the same buffers bound. The debugger tells me that the SIGSEGV signal comes from ig4icd32.dll in the System32 folder, which looks like a video card driver DLL to me.

Is this some hardware or Vista issue, or could this crash be due to unclean code? What could be wrong with my buffers?

The video card in my computer is an NVIDIA GeForce FX5500; the notebook has an integrated Intel GMA X3100.

[EDIT] Commenting out glDrawRangeElements doesn't change anything - the application SIGSEGVs in ig4icd32.dll on different OpenGL calls (this time glBindBuffer). [/EDIT]

[Edited by - AzraelilMeraz on September 27, 2008 5:08:06 PM]
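
A minimal debugging sketch (GL_CHECK, elementVBO and numTris are hypothetical names, not from my code): checking glGetError after each call can at least rule out invalid arguments or bad buffer state before the driver dereferences a stale pointer:

[source lang="cpp"]
#include <cstdio>
// hypothetical helper: run a GL call, then report any pending GL error
#define GL_CHECK(call) \
    do { call; GLenum err = glGetError(); \
         if(err != GL_NO_ERROR) \
             fprintf(stderr, "%s failed: 0x%04x\n", #call, err); \
    } while(0)

// wrap the calls that crash, e.g.:
GL_CHECK(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementVBO));
GL_CHECK(glDrawRangeElements(GL_TRIANGLES, 0, numTris*3-1, numTris*3,
                             GL_UNSIGNED_INT, NULL));
[/source]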
