pondwater

Members

  • Content count: 90
  • Joined
  • Last visited

Community Reputation: 191 Neutral

About pondwater

  • Rank: Member
  1. @nonoptimalrobot

     Sorry, the colouring algorithm (which, you're right, probably isn't even the correct term) is used to compute a value in the objective function of the annealing algorithm. The annealing is used to optimize the placement of these ellipses over a mesh. I'm using a GPU approach similar to the one used in:

     Han, Y., Roy, S., & Chakraborty, K. (2011, March). Optimizing simulated annealing on GPU: A case study with IC floorplanning. In Quality Electronic Design (ISQED), 2011 12th International Symposium on (pp. 1-7). IEEE.

     It works very nicely so far, except that I've hit a bottleneck on this particular area calculation. When I mentioned: "Currently I run 256 to 1024 compute shader invocations depending on the resolution of my current approach, each calculating the overlapped area asynchronously." I meant that each compute shader invocation computes the area for its respective energy value in the objective function.

     @unbird

     Here is an image example:

     Top (side view): red = original mesh surface, blue = projected mesh surface, black = disks and their projections.

     Bottom (top-down view): blue = projected surfaces, green = uncovered area I want to calculate.
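     Roughly, the uncovered area just plays the role of the energy in a standard Metropolis-style annealing loop. A minimal CPU-side sketch only (the real thing runs many such chains in a compute shader; Placement, the energy function and the perturbation are stand-ins, not my actual code):

        #include <cmath>
        #include <functional>
        #include <random>

        struct Placement { /* ellipse centres, radii, orientations, ... */ };

        // energyFn is the uncovered-area query discussed above; perturb proposes a
        // neighbouring placement. Both are mesh-specific, so they are passed in.
        Placement anneal(Placement current,
                         const std::function<double(const Placement&)>& energyFn,
                         const std::function<Placement(const Placement&, std::mt19937&)>& perturb,
                         double temperature, double cooling, int steps)
        {
            std::mt19937 rng{std::random_device{}()};
            std::uniform_real_distribution<double> uniform(0.0, 1.0);
            double energy = energyFn(current);

            for (int i = 0; i < steps; ++i)
            {
                Placement candidate = perturb(current, rng);
                double candidateEnergy = energyFn(candidate);
                double delta = candidateEnergy - energy;

                // Always accept improvements; accept worse moves with Boltzmann probability.
                if (delta < 0.0 || uniform(rng) < std::exp(-delta / temperature))
                {
                    current = candidate;
                    energy = candidateEnergy;
                }
                temperature *= cooling; // simple geometric cooling schedule
            }
            return current;
        }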
  2. Currently I run 256 to 1024 compute shader invocations depending on the resolution of my current approach, each calculating the overlapped area asynchronously.

     I was considering using a rendering type approach as you suggested, but drawing and counting pixels instead of simply querying. Do you think there would be a performance gain in what would effectively be 256-1024 sequential occlusion queries per annealing iteration?

     I do not have any experience with the ARB_occlusion_query OpenGL extension, so I do not know anything about its performance.
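     For reference, my understanding of the basic usage (ARB_occlusion_query, core since OpenGL 1.5) is something like the following; drawProjectedDisks() is just a placeholder for whatever geometry would be rasterized:

        GLuint query = 0;
        glGenQueries(1, &query);

        glBeginQuery(GL_SAMPLES_PASSED, query);
        drawProjectedDisks();              // placeholder draw of the overlapping shapes
        glEndQuery(GL_SAMPLES_PASSED);

        // Reading the result waits for the GPU to finish the query, which is why
        // hundreds of sequential queries per annealing iteration worries me.
        GLuint samplesPassed = 0;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed);
        glDeleteQueries(1, &query);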
  3. I have a relatively flat mesh that can be projected onto a plane. The area of this projected mesh can be easily calculated.

     I will be projecting disks (cylinders with 0 height) onto this projected mesh, which will result in ellipses.

     I need to find the area left uncovered.

     Originally, to accomplish this, I quantized the mesh into n uniform pieces and used a colouring algorithm where each discrete piece is marked as overlapped or not overlapped. Then I sum the areas of the overlapped pieces and subtract them from the total area of the projected mesh (a CPU-side sketch of this is below).

     This is implemented in a compute shader as part of an energy calculation for simulated annealing.

     Unfortunately it does not yield suitable performance at high resolutions, nor the required accuracy at lower resolutions.

     Is there any other way to calculate this?

     EDIT: Is there a name for this type of problem? I do not know what to search for to find ideas.
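     To make the current approach concrete, here is a CPU-side sketch of the colouring idea (the real version is a compute shader, and the projected mesh footprint is simplified to a rectangle here; Ellipse and insideEllipse() are stand-ins):

        #include <vector>

        struct Ellipse { float cx, cy, rx, ry; };   // axis-aligned for simplicity

        static bool insideEllipse(float x, float y, const Ellipse& e)
        {
            const float dx = (x - e.cx) / e.rx;
            const float dy = (y - e.cy) / e.ry;
            return dx * dx + dy * dy <= 1.0f;
        }

        // Area of the [0,w] x [0,h] projection NOT covered by any ellipse,
        // sampled on an n-by-n grid of cell centres.
        float uncoveredArea(float w, float h, int n, const std::vector<Ellipse>& ellipses)
        {
            const float cellW = w / n, cellH = h / n;
            int covered = 0;
            for (int j = 0; j < n; ++j)
                for (int i = 0; i < n; ++i)
                {
                    const float x = (i + 0.5f) * cellW;
                    const float y = (j + 0.5f) * cellH;
                    for (const Ellipse& e : ellipses)
                        if (insideEllipse(x, y, e)) { ++covered; break; }
                }
            return w * h - covered * cellW * cellH;
        }

     The accuracy/performance trade-off is exactly the grid resolution n.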
  4. Let's say I have a concave triangular mesh surface, M, and convex submeshes, sMi, each of which is generated from the surface of M (so that any shared triangles can be considered identical). Submeshes can overlap each other.

     Let's say we need to find the total area of M that is not part of any of the submeshes.

     I would do this by simply iterating over every submesh and flagging each corresponding triangle on M, then summing the areas of all non-flagged triangles (sketch below).

     This is easy. My problem is a little more complicated.

     My submeshes contain not only fully corresponding triangles, but also triangles that have been subdivided into partial triangles. They are generated from circular projections along the centre point normal.

     Here is an example of a submesh, with the black lines being the triangles from M and the green lines being the cuts of the subdivided triangles. The black circular outlines can be ignored.

     Remember that these submeshes can overlap, so the sub-triangles could also overlap.

     For example, this particular triangle is overlapped by two submeshes, blue and green (purple is the union of both).

     Does anyone know of an approach I could look into for this type of problem?
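     For reference, the "easy" whole-triangle case I described is just this (Triangle and the index lists are stand-ins; partially cut triangles are exactly what this does NOT handle):

        #include <cstddef>
        #include <vector>

        struct Triangle { float area; };

        float uncoveredTriangleArea(const std::vector<Triangle>& meshM,
                                    const std::vector<std::vector<int>>& submeshTriangleIds)
        {
            std::vector<bool> flagged(meshM.size(), false);

            // Flag every triangle of M referenced by any submesh sMi.
            for (const auto& sub : submeshTriangleIds)
                for (int id : sub)
                    flagged[id] = true;

            // Sum the areas of the triangles no submesh touches.
            float total = 0.0f;
            for (std::size_t i = 0; i < meshM.size(); ++i)
                if (!flagged[i])
                    total += meshM[i].area;
            return total;
        }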
  5. Essentially I would like to create a bi-cubic spline patch, defined by four points; however, the patch will not be rectangular. Is this possible?

     Someone recommended Hermite splines to me. In 2D it seems that a Hermite spline is defined by two points and two tangents, but I can't seem to find resources explaining how to approach it in 3D, let alone for irregular patches.

     Could anyone recommend any resources?

     Edit:

     I've looked around more and the only resource I was able to find talking about bicubic Hermite splines is "Curves and Surfaces for Computer Graphics" by Salomon, D., found at ftp://89.249.167.25/1000/803161bb9fa79c4f29576ba8b5114838

     Section 4.9 has the information on bicubic Hermite curves.

     Here are, essentially, the formulas for creating the Hermite curve, spliced together. I find it very difficult to follow, as he refers to variables defined in previous chapters that rely on information that does not make sense (to me) in the current formula. Other variables are never defined, and everything is simply defined as a rearrangement of itself.

     P(u,w) and aij are apparently from a previous chapter, which are defined as:

     I feel like I'm on crazy pills. Does this make sense to anyone?
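     For what it's worth, the usual textbook form (not necessarily Salomon's exact notation, so take this as my best reconstruction) is

     \[
     \mathbf{P}(u,w) \;=\; \sum_{i=0}^{3}\sum_{j=0}^{3} \mathbf{a}_{ij}\, u^{i} w^{j}, \qquad 0 \le u, w \le 1,
     \]

     which in Hermite form is usually factored as

     \[
     \mathbf{P}(u,w) \;=\; U \, M_H \, G \, M_H^{\mathsf{T}} \, W^{\mathsf{T}}, \qquad
     U = (u^{3}, u^{2}, u, 1), \quad W = (w^{3}, w^{2}, w, 1),
     \]

     \[
     M_H = \begin{pmatrix} 2 & -2 & 1 & 1 \\ -3 & 3 & -2 & -1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}, \qquad
     G = \begin{pmatrix}
     \mathbf{P}(0,0) & \mathbf{P}(0,1) & \mathbf{P}_w(0,0) & \mathbf{P}_w(0,1) \\
     \mathbf{P}(1,0) & \mathbf{P}(1,1) & \mathbf{P}_w(1,0) & \mathbf{P}_w(1,1) \\
     \mathbf{P}_u(0,0) & \mathbf{P}_u(0,1) & \mathbf{P}_{uw}(0,0) & \mathbf{P}_{uw}(0,1) \\
     \mathbf{P}_u(1,0) & \mathbf{P}_u(1,1) & \mathbf{P}_{uw}(1,0) & \mathbf{P}_{uw}(1,1)
     \end{pmatrix},
     \]

     i.e. G holds the four corner points, the corner tangents in u and w, and the corner twist vectors \(\partial^{2}\mathbf{P}/\partial u \partial w\). If that is what the \(a_{ij}\) ultimately expand to, the previous-chapter references at least become mechanical.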
  6. Well this is embarrassing, lol. I guess I was really overthinking it.

     That works perfectly, thank you very much!
  7. I have a vector A, and I have a plane P consisting of a point (Pp) and a normal (Pn).

     I need to find the vector B that is orthogonal to A and also lies in P. To prevent infinite solutions, let's make B unit length.

     If A is perpendicular to P there are infinitely many solutions; we can ignore this case, as in my application it will never occur.

     As far as I can see, in every other case there will be two vectors.

     We have 3 unknowns and therefore need three equations:

     1) A and B are orthogonal, therefore their dot product is zero:
        (A.x * B.x) + (A.y * B.y) + (A.z * B.z) = 0

     2) If B is in the plane P, then B and Pn are orthogonal and their dot product is zero:
        (B.x * Pn.x) + (B.y * Pn.y) + (B.z * Pn.z) = 0

     3) B will be unit length:
        sqrt(B.x^2 + B.y^2 + B.z^2) = 1

     Now I assume I could plug these into a fancy equation solver and find vector B (and its negation), but I was hoping there is a more specific / optimized approach.

     Any ideas?
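     For the record, the simple closed form is the normalized cross product of A and Pn: it is perpendicular to A, perpendicular to Pn (so it lies in the plane's direction space), and unit length after normalization; its negation is the second solution. A minimal sketch, with Vec3 as a stand-in type:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3 cross(const Vec3& a, const Vec3& b)
        {
            return { a.y * b.z - a.z * b.y,
                     a.z * b.x - a.x * b.z,
                     a.x * b.y - a.y * b.x };
        }

        static Vec3 normalize(const Vec3& v)
        {
            const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return { v.x / len, v.y / len, v.z / len };
        }

        // Degenerates (zero vector) only when A is parallel to Pn, i.e. the
        // infinite-solution case dismissed above.
        Vec3 orthogonalInPlane(const Vec3& A, const Vec3& Pn)
        {
            return normalize(cross(A, Pn));
        }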
  8. I'm writing a 2D rendering engine, and currently I have shadow casting on convex shapes generated from the sprite textures. I would like to expand it to concave shapes, so now I need the concave hull of my sprite. Generating the edge vertices isn't difficult; it is assembling them into a coherent ordering that is giving me trouble.

     I know there is more than one concave hull for a generic set of points, but I'm hoping it is possible to exploit the spatial locality of the points produced from the texture to ensure the correct hull.

     At first I was hoping to simply pick a random point, then choose the closest point and just follow the trail. However, this fails quite easily. I'm wondering if anyone has any knowledge about this or any ideas on how to accomplish this...
  9. I'm writing a little rendering engine for 2D sprites incorporating dynamic lighting via normal maps. For opaque sprites I'm using deferred rendering. I'm attempting to include forward rendering for transparent sprites.

     For the forward rendering, since I cannot take advantage of the depth buffer (all sprites are on the same Z-plane), I'm not sure how to accumulate lights properly to prevent bright patches where sprites overlap.

     What I've currently been doing is simply iterating over each transparent object for each light, accumulating the fragments using additive alpha. For this I either turn off depth testing, or set it to GL_LEQUAL. It doesn't really matter, because all quads are drawn on the same Z plane so depth testing is irrelevant.

     My problem comes in when two transparent objects intersect. Normally for opaque objects the sprite drawn last overwrites whatever is in the G-Buffer, which is what I want. However, for forward rendering, the additive blending of the lights causes bright patches where the sprites overlap.

     So my problem boils down to: for an individual transparent sprite the lights must be accumulated with additive blending, while between transparent sprites the results should be combined using conventional alpha*S + (1 - alpha)*D blending.

     The only way I can think to accomplish this is to clear and fill an individual FBO for each transparent sprite: I clear the FBO, render an individual transparent object accumulating each light with additive blending, then alpha blend that with the final buffer from my deferred renderer. Clear the FBO, begin rendering the next transparent object, and so forth (rough sketch below).

     This seems horrifically inefficient. What is the proper way to do this?
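     Rough sketch of the per-sprite FBO idea, just to be explicit about what I mean (Sprite, Light, drawSpriteLit(), compositeTexture() and the FBO handles are placeholders, not my actual code):

        #include <GL/glew.h>   // or whatever GL loader is in use
        #include <vector>

        struct Sprite { /* position, texture, normal map, ... */ };
        struct Light  { /* position, colour, radius, ... */ };
        void drawSpriteLit(const Sprite&, const Light&);   // placeholder: draws one sprite lit by one light
        void compositeTexture(GLuint texture);             // placeholder: draws a full-screen textured quad

        void renderTransparentSprites(const std::vector<Sprite>& sprites,
                                      const std::vector<Light>& lights,
                                      GLuint lightFBO, GLuint lightTexture, GLuint finalFBO)
        {
            for (const Sprite& sprite : sprites)
            {
                // 1) Accumulate lighting for this sprite alone.
                glBindFramebuffer(GL_FRAMEBUFFER, lightFBO);
                glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
                glClear(GL_COLOR_BUFFER_BIT);
                glEnable(GL_BLEND);
                glBlendFunc(GL_ONE, GL_ONE);                       // additive light accumulation
                for (const Light& light : lights)
                    drawSpriteLit(sprite, light);

                // 2) Composite the lit sprite over the deferred result.
                glBindFramebuffer(GL_FRAMEBUFFER, finalFBO);
                glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // conventional alpha blend
                compositeTexture(lightTexture);
            }
        }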
  10. I'm attempting to capture the back buffer in a texture, and then, in an attempt to make sure it's working, simply redraw the texture on the screen. I've copied the appropriate sections of code line for line from a tutorial example. The example compiles and works, mine doesn't. I can't for the life of me understand why.

      Here is the initialization code that creates the empty texture:

        glClearColor( 0.0f, 0.0f, 0.0f, 1.0f); // set clear color to black
        unsigned int *empty = new unsigned int[((glutGet(GLUT_WINDOW_WIDTH) * glutGet(GLUT_WINDOW_HEIGHT)) * 4 * sizeof(unsigned int))];
        glEnable( GL_TEXTURE_2D );
        glGenTextures( 1, &texture_buffer );
        glBindTexture( GL_TEXTURE_2D, texture_buffer);
        glTexImage2D( GL_TEXTURE_2D, 0, 4, glutGet(GLUT_WINDOW_WIDTH), glutGet(GLUT_WINDOW_HEIGHT), 0, GL_RGBA, GL_UNSIGNED_BYTE, empty);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
        glBindTexture(GL_TEXTURE_2D, 0);
        glDisable(GL_TEXTURE_2D);
        delete [] empty;

      Here is the code that draws the scene, grabs it off the back buffer, then (for testing) attempts to draw the texture to the screen:

        glClear( GL_COLOR_BUFFER_BIT );
        state->draw();
        glFlush();
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, texture_buffer);
        glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, glutGet(GLUT_WINDOW_WIDTH), glutGet(GLUT_WINDOW_HEIGHT), 0);
        glClear( GL_COLOR_BUFFER_BIT );
        glBegin( GL_QUADS );
            glTexCoord2f(0,1); glVertex2f(WORLD_LEFT, WORLD_TOP);
            glTexCoord2f(1,1); glVertex2f(WORLD_RIGHT, WORLD_TOP);
            glTexCoord2f(1,0); glVertex2f(WORLD_RIGHT, WORLD_BOTTOM);
            glTexCoord2f(0,0); glVertex2f(WORLD_LEFT, WORLD_BOTTOM);
        glEnd();
        glBindTexture(GL_TEXTURE_2D, 0);
        glDisable(GL_TEXTURE_2D);
        glutSwapBuffers();

      Anything seem out of place?
  11. I am modifying some existing code that has a 2D orthographic projection set as:

        glOrtho(-0.02, 1.02, -0.02, 0.68, -1, 1);

      I have never had an issue with transferring pixel coords (mouse clicks specifically) into world space when the left/right, top/bottom values of the frustum are symmetrical. I simply convert the pixel values to normalized -1 to 1 'window coords' centered in the middle of the window and then scale those values by the right/top values given to my frustum.

      But for some reason I am getting really confused when converting to this asymmetrical frustum. This should be simple, but I can't for the life of me figure it out!

      EDIT:

      I feel silly now, I figured it out:

        // normalized window coords with the origin at the bottom left of the window
        float x = pixel_x / window_width;
        float y = (window_height - pixel_y) / window_height;
        // world coords by scaling and translating
        x = x * (frustum_right - frustum_left)   + frustum_left;
        y = y * (frustum_top   - frustum_bottom) + frustum_bottom;
  12. I'm confused with how Assimp deals with the bind shape matrices (not to be confused with the bind transforms for bones/joints) for COLLADA models.

      Normally this bind shape matrix is baked into the vertices, either before or during the skinning calculations. But for some reason Assimp bakes it into the inverse binds.

      In ColladaLoader.cpp:

        // apply bind shape matrix to offset matrix
        aiMatrix4x4 bindShapeMatrix;
        bindShapeMatrix.a1 = pSrcController->mBindShapeMatrix[0];
        bindShapeMatrix.a2 = pSrcController->mBindShapeMatrix[1];
        bindShapeMatrix.a3 = pSrcController->mBindShapeMatrix[2];
        bindShapeMatrix.a4 = pSrcController->mBindShapeMatrix[3];
        bindShapeMatrix.b1 = pSrcController->mBindShapeMatrix[4];
        bindShapeMatrix.b2 = pSrcController->mBindShapeMatrix[5];
        bindShapeMatrix.b3 = pSrcController->mBindShapeMatrix[6];
        bindShapeMatrix.b4 = pSrcController->mBindShapeMatrix[7];
        bindShapeMatrix.c1 = pSrcController->mBindShapeMatrix[8];
        bindShapeMatrix.c2 = pSrcController->mBindShapeMatrix[9];
        bindShapeMatrix.c3 = pSrcController->mBindShapeMatrix[10];
        bindShapeMatrix.c4 = pSrcController->mBindShapeMatrix[11];
        bindShapeMatrix.d1 = pSrcController->mBindShapeMatrix[12];
        bindShapeMatrix.d2 = pSrcController->mBindShapeMatrix[13];
        bindShapeMatrix.d3 = pSrcController->mBindShapeMatrix[14];
        bindShapeMatrix.d4 = pSrcController->mBindShapeMatrix[15];
        bone->mOffsetMatrix *= bindShapeMatrix;

      Is there a specific reason for this? It results in a unique set of inverse binds for EACH MESH, so each set of inverse binds is stored with its respective mesh instead of as a single global set with the skeleton.

      I cannot for the life of me understand why Assimp does this. Why not bake it into the vertices instead, or simply store it somewhere on its own?
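      For context, this is how it plays out at runtime, as far as I understand it (a rough sketch of the usual Assimp-based skinning palette; globalTransformOf() is a placeholder for walking the node hierarchy, and the renderer's global inverse transform is omitted):

        #include <assimp/matrix4x4.h>
        #include <assimp/mesh.h>
        #include <vector>

        aiMatrix4x4 globalTransformOf(const aiBone* bone);   // placeholder: accumulated node transform

        std::vector<aiMatrix4x4> buildPalette(const aiMesh* mesh)
        {
            std::vector<aiMatrix4x4> palette(mesh->mNumBones);
            for (unsigned int i = 0; i < mesh->mNumBones; ++i)
            {
                const aiBone* bone = mesh->mBones[i];
                // Because mOffsetMatrix already contains inverseBind * bindShape
                // (see the ColladaLoader code above), raw mesh-space vertices can be
                // transformed by this single product:
                //   skinned = sum_i( weight_i * palette_i * vertex )
                palette[i] = globalTransformOf(bone) * bone->mOffsetMatrix;
            }
            return palette;
        }

      Which is presumably why it works, but it still ties the inverse binds to the mesh rather than to the skeleton.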
  13. Find yourself an XML parser. I used rapidxml from http://rapidxml.sourceforge.net/ ; it seemed lightweight, but then again I don't have much experience parsing XML files, so definitely look around yourself. Here would be a good starting point: http://stackoverflow.com/questions/170686/best-open-xml-parser-for-c

      Definitely look at Wazim's article for a nice introduction to the .dae file format and how to approach it.
  14. Here is the link you were talking about: http://www.wazim.com/Collada_Tutorial_1.htm

      In any case, you want to have some sort of asset conditioning pipeline where you take in an interchange format like .dae and bring it into your own runtime format for your game/engine. How your engine manages its content at runtime is another story.

      I attempted to take the approach you did and write my own COLLADA parser. I got it working, but it wasn't robust at all. I found the incredible amount of variance in the .dae format extremely frustrating and ended up switching to Assimp for my .dae models. It is great: a nice clean interface, very robust, and much faster than my own parser at optimizing vertex indices.

      I'd recommend attempting to roll your own like I did; once you get it working for just one instance of the .dae format, you will understand the format much better and will feel more comfortable transitioning to Assimp.

      Here is a fantastic tutorial if you decide to switch to Assimp: http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html It also teaches you how to skin on the GPU, instead of on the CPU like Wazim's tutorial does.
  15. Mesh and Skeleton organization

    I'm trying to figure out under what circumstances unique inverse binds for each mesh would be needed. The only reason I have them is that Assimp concatenates the COLLADA bind shape matrices into them, and for the life of me I can't understand why this decision was made. I just don't understand why they aren't stored under the respective mesh's aiNode->mTransformation...