0x00000001

  1. 0x00000001

    Efficient data packing

      Sorry for the late reply; I just got a chance to work on this project. Thanks for all the info, it was a great read and highly informative. I tried a quick solution of splitting the data: I made a separate structure to hold only the particle position and colour and passed that to the GPU. This turned out to perform worse, haha (probably due to copying the data out of the class). Anyway, I think I'm just going to leave it as it is and instead attempt to move all the calculations to the GPU, which should definitely boost performance. Thanks again.
  2. 0x00000001

    Metaballs with Marching cubes

      Wow, such a simple fix, haha, thanks man, it worked. Although the resolution automatically decreases when I increase the scale and range. In your opinion, what would be the best way to specify those x, y, z values? Should I leave it as it is, or is there an alternative method you think would be better? Again, thanks for the help, I appreciate it.

      No worries! Yes, these two parameters aren't ideal because they both influence resolution and range. You can, however, define parameters which each influence only one of the two variables, by using the concept of "points per unit length", where a point is where you evaluate the metaballs to create a control point for the marching cubes algorithm. It would go as follows:

      Parameters:
      -----------
      resolution [points/length]
      range [length]
      offset [length]
      (for simplicity each parameter applies to all three dimensions; separate as needed)

      Notes:
      ------
      The number of points in each dimension is just resolution * range [points/length * length = points].

      Code:
      -----
      for (i = 0; i <= resolution * range; ++i)
      {
          for (j = 0; j <= resolution * range; ++j)
          {
              for (k = 0; k <= resolution * range; ++k)
              {
                  // corresponds to point (i, j, k); use resolution and offset
                  float x = (float)i / resolution + offset;
                  float y = (float)j / resolution + offset;
                  float z = (float)k / resolution + offset;
                  // evaluate point at (x, y, z)
              }
          }
      }

      Note that now, if you modify the range, the resolution remains unchanged (you will simply have *more* points to evaluate at the same resolution), and similarly, changing the resolution does not modify the effective range (the points will just be more spread out over the original range, trading accuracy for speed). The code above will generate control points between "offset" and "range + offset" inclusive (in every dimension), so tweak as needed. Note you can of course have multiple parameters, one for each dimension, if you want to (like offsetY, resolutionX, etc.).
      You can have fractional resolution and range by truncating (or rounding? not sure, try it out) the value "resolution * range" inside your for loop. In fact, an even more elegant, and perhaps more intuitive, implementation would go as follows:

      for (float x = offset; x <= range + offset; x += 1.0f / resolution)
      {
          for (float y = offset; y <= range + offset; y += 1.0f / resolution)
          {
              for (float z = offset; z <= range + offset; z += 1.0f / resolution)
              {
                  // evaluate point at (x, y, z)
              }
          }
      }

      Which is really saying: start at "offset", and increment the control points in steps of 1 / resolution until you reach "range + offset". After all, for loops aren't only for integers (though this may be less optimized and might be subject to floating-point inaccuracies, try it out anyway).

      Sorry for the late reply, I work on this as a side project. Anyway, I tried your methods and just couldn't get it to work, haha :( the balls just don't show at all, and I'm not really fussed to debug and see why, so I just left it as it was previously. I added an update function and random velocity to test the performance, and it is not good: I can only do about 32 metaballs with voxelisation at 32^3. It's most likely my bad coding, haha, so I need to research further into rendering fluid. Thanks for all the help.
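      For reference, the integer-index variant above can be written as a small self-contained routine (the `Point` struct and the name `sampleGrid` are my own, not from the thread; the loop logic follows the post):

      ```cpp
      #include <cassert>
      #include <cmath>
      #include <vector>

      struct Point { float x, y, z; };

      // Generate control points between offset and offset + range (inclusive),
      // spaced 1/resolution apart, uniformly in all three dimensions, as
      // described above. resolution is in points/length, range and offset
      // are in length units.
      std::vector<Point> sampleGrid(float resolution, float range, float offset)
      {
          std::vector<Point> pts;
          int n = static_cast<int>(resolution * range); // index of the last point per axis
          for (int i = 0; i <= n; ++i)
              for (int j = 0; j <= n; ++j)
                  for (int k = 0; k <= n; ++k)
                      pts.push_back({ i / resolution + offset,
                                      j / resolution + offset,
                                      k / resolution + offset });
          return pts;
      }
      ```

      With resolution = 2 points/length, range = 2 and offset = -1 this yields a 5x5x5 grid covering -1 to 1, which matches the "resolution * range points per dimension" note above.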
  3. 0x00000001

    Metaballs with Marching cubes

      Wow, such a simple fix, haha, thanks man, it worked. Although the resolution automatically decreases when I increase the scale and range. In your opinion, what would be the best way to specify those x, y, z values? Should I leave it as it is, or is there an alternative method you think would be better? Again, thanks for the help, I appreciate it :)
  4. 0x00000001

    Metaballs with Marching cubes

      The isovalue really tells you where the limit between solid regions and empty space (the "contour") is located. If it is too high or too low, everything will be considered solid or empty, and as you vary it, more (or less) of the density field is considered solid, which leads to interesting results. I don't think there is really any "good" value for an isovalue; it depends a lot on your metaball implementation and on the effect you want to achieve, so I wouldn't be worried if you need to tweak it a bit to get good-looking metaballs.

      This shouldn't be a problem with the metaball algorithm itself. I would check your marching cubes code to verify it works outside the 0..1 unit cube. If it doesn't, then obviously everything outside it will never be polygonized, and therefore never rendered. It's not possible to polygonize things arbitrarily far away, unfortunately; there is always a tradeoff between range and resolution, because marching cubes is a discrete algorithm. Other methods like ray marching can sort of scale to arbitrary distances, but have their own set of drawbacks.

      To make sure your balls can move around, you can compute the bounding box of your metaballs and use that to define the range for the marching cubes algorithm, but you can't have them go too far away from one another or you will lose accuracy. You might wonder why, since in that case the marching cubes algorithm would spend most of its time on empty voxels. Yes, it is possible to do better, but it gets rather nasty, as you then need to find an approximate bounding box for every set of connected metaballs and run the marching cubes algorithm on each of them separately, which sounds great on paper but isn't that efficient in practice. I think most people deal with this by making reasonable tradeoffs between the size of their worlds and how precise the polygonization should be.
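      The bounding-box idea above can be sketched like this (the `Metaball` and `Box` structs and their field names are my own assumptions, not the poster's code; the margin pads the box so the field can fall below the isovalue before the grid ends):

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <cmath>
      #include <vector>

      struct Metaball { float x, y, z, radius; };
      struct Box { float min[3], max[3]; };

      // Compute an axis-aligned box enclosing all metaballs, padded by a
      // margin, to use as the sampling range for marching cubes.
      Box boundingBox(const std::vector<Metaball>& balls, float margin)
      {
          Box b = { {  1e30f,  1e30f,  1e30f },
                    { -1e30f, -1e30f, -1e30f } };
          for (const Metaball& m : balls) {
              const float c[3] = { m.x, m.y, m.z };
              for (int a = 0; a < 3; ++a) {
                  b.min[a] = std::min(b.min[a], c[a] - m.radius - margin);
                  b.max[a] = std::max(b.max[a], c[a] + m.radius + margin);
              }
          }
          return b;
      }
      ```

      The resulting min/max then play the role of "offset" and "offset + range" in the sampling loop from the earlier reply.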
      Marching cubes can be rather expensive, especially if you want really good resolution. Looking at your screenshot, your mesh is quite smooth, meaning it's probably polygonizing up to 128x128x128 voxels, which is *a lot*, especially done on the CPU. If you wanted to go interactive, and it is possible, you would move the marching cubes code to the GPU, in a shader, and perhaps scale back the resolution a notch. It isn't too hard at all, in fact, and the speed boost is huge, since doing the same calculation over a lot of different locations is what the graphics card does best. Then you can add algorithmic optimizations to avoid calculating every single sphere's potential for each voxel, which obviously won't do when you start rendering dozens of metaballs, by using techniques like octrees or spatial hashing. It can get rather efficient, really, but it has its limits. As we all know, the really cool stuff is done by cleverly combining different techniques.

      I don't know if metaballs would be my first choice for fluid rendering, though. It still seems like an inefficient method overall; I would use a grid-based fluid dynamics system if I wanted to do it properly (though a fractal heightmap or even FFT water works well for static water bodies). Marching cubes itself sounds good for displaying the results, however. Remember to dissociate what you are rendering (metaballs) from how you are rendering it (marching cubes); the two have nothing in common except being often used together.

      ... great, now I want to write a metaball renderer.

      Haha, you should write one, then you can help me make mine better :) Anyway, I have been messing about with it for a while but still cannot draw outside the 0 -> 1 range.
I found this source online: https://github.com/kamend/3D-Metaballs-with-Marching-Cubes/tree/master/src (note: it's not mine) to try to see how they control the range, and I see it uses gridX, gridY, gridZ, which is basically the same thing as my m_volume_depth etc. Can anyone have a look through that source and see which bit alters the range, so I can draw more than a unit cube? Thanks.
  5. 0x00000001

    Metaballs with Marching cubes

    Thanks guys, got it working :) Although I have a couple of queries. After changing the equation, I had to reduce the isovalue quite a lot, to 0.008; is it normal for it to be that low? Also, I can only specify metaball positions between 0 -> 1; anything outside doesn't get drawn, and I want to be able to draw them anywhere on screen. I'm not sure which part of the algorithm decides the range? Finally, just a general question: eventually I want to use this to render fluid. Some people seem to use marching cubes to render fluid, however it took my app about 4 seconds just to render those four static metaballs. What kind of technique is good for fluid rendering? Thanks for the help guys.
  6. 0x00000001

    Metaballs with Marching cubes

        I have marching cubes already implemented, and that's how it's drawing one sphere from the equation. I don't know how to explain this, but basically: how would I pass the metaball data to draw multiple spheres instead of just one? The marching cubes algorithm works and my equations work. For example, if I just pass the equation of a sphere to my volume data:

        m_volumeData[i*m_volume_width*m_volume_height + j*m_volume_width + k] = x*x + y*y + z*z - r*r;

        It draws a sphere perfectly; I'm just unsure how I would pass multiple spheres to m_volumeData.
  7. Hi everyone. I have implemented the marching cubes algorithm and I am trying to render metaballs. The problem I have is that no matter how many metaballs I define, it will always only draw one, so I guess my question is how to properly integrate the metaball data with marching cubes. Each metaball has a position and a radius, and is then stored in a vector container. Then, in my volume function for marching cubes, I have this:

     for (i = 0; i < m_volume_depth; ++i)
     {
         for (j = 0; j < m_volume_height; ++j)
         {
             for (k = 0; k < m_volume_width; ++k)
             {
                 float x = (float)i / m_volume_depth - 0.5;
                 float y = (float)k / m_volume_width - 0.5;
                 float z = (float)j / m_volume_height - 0.5;
                 Vec3 p = Vec3(x, y, z);
                 m_volumeData[i*m_volume_width*m_volume_height + j*m_volume_width + k] = meta(p);
             }
         }
     }

     Here is the meta function:

     float sum = 0;
     for (int x = 0; x < m_metaBalls.size(); ++x)
     {
         sum += m_metaBalls.at(x)->equation_sphere(_pos);
     }
     return sum;

     And here is my equation code:

     return ((_pos.m_x - m_position.m_x)*(_pos.m_x - m_position.m_x) +
             (_pos.m_y - m_position.m_y)*(_pos.m_y - m_position.m_y) +
             (_pos.m_z - m_position.m_z)*(_pos.m_z - m_position.m_z) -
             m_radius*m_radius);

     _pos is the passed-in position; m_position is the origin of the metaball. I don't think there is anything wrong with the metaball class, because it draws a sphere perfectly, but only one. How would you integrate the metaball data with marching cubes? If you require any other parts of the code, then please let me know; this has been driving me crazy for a while.
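     A common reason summed sphere equations don't blend is that each ball's contribution grows without bound away from its centre, so one ball's field drowns out the others. The classic metaball field instead falls off with distance, which matches the "after changing the equation" remark in the later post. A sketch of that field (the names `fieldAt`, `Metaball`, and the epsilon guard are my own, not the poster's code):

     ```cpp
     #include <cassert>
     #include <cmath>
     #include <vector>

     struct Vec3 { float x, y, z; };
     struct Metaball { Vec3 pos; float radius; };

     // Classic metaball field: each ball contributes radius^2 / distance^2,
     // so contributions add up smoothly and nearby balls blend together.
     // The surface is extracted wherever the summed field crosses the
     // chosen isovalue.
     float fieldAt(const std::vector<Metaball>& balls, const Vec3& p)
     {
         float sum = 0.0f;
         for (const Metaball& m : balls) {
             float dx = p.x - m.pos.x;
             float dy = p.y - m.pos.y;
             float dz = p.z - m.pos.z;
             float d2 = dx * dx + dy * dy + dz * dz;
             sum += m.radius * m.radius / (d2 + 1e-6f); // epsilon avoids division by zero
         }
         return sum;
     }
     ```

     With this field, "inside" means the sum is above the isovalue, so the comparison direction is the opposite of the raw sphere equation, which is worth keeping in mind when porting existing marching cubes code.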
  8. 0x00000001

    Efficient data packing

      I'll try splitting the buffer and see what happens, but I'm just curious about something. Every single example/tutorial I have seen always stores the object itself in a vector, but never a pointer to that object; why is that? Would std::vector<particle *> make no performance difference?

      Thanks for that, I am now passing all glm::vec3 by const reference and even did it for the floats. It managed to get about 3 frames per second more, nothing major, but still better than nothing lol.
  9. Hi everyone, I just had a few questions about OpenGL and performance. I made a very simple 2D particle system; all the particles get attracted to a fixed point using gravitational attraction equations, so nothing too intensive. For my data I used structs and packed it into a single buffer:

     typedef struct particle
     {
         GLfloat age;
         GLfloat pos[2];
         GLfloat vel[2];
         GLfloat col[3];

         void setPosition(float x, float y) { pos[0] = x; pos[1] = y; }
         void setColour(GLfloat r, GLfloat g, GLfloat b) { col[0] = r; col[1] = g; col[2] = b; }
         void setVelocity(GLfloat x, GLfloat y) { vel[0] = x; vel[1] = y; }
     } particle;

     I store the particles in a vector:

     m_particles.resize(n);
     std::vector<particle>::iterator it;
     for (it = m_particles.begin(); it != m_particles.end(); ++it)
     {
         // set position etc.
     }

     Then the usual generating, binding, and drawing vertex arrays code. All the calculations are done on the CPU, and I could get about 250,000 particles with a good framerate.

     Now, to clean up the code and add more functionality, I decided to make a particle class:

     class particle
     {
     public:
         particle(glm::vec3 _pos, glm::vec3 _vel, glm::vec3 _col, float _mass);
         ~particle();

         glm::vec3 getPosition() { return m_position; }
         glm::vec3 getVelocity() { return m_velocity; }
         void setColour(float _r, float _g, float _b) { m_colour.x = _r; m_colour.y = _g; m_colour.z = _b; }

     private:
         glm::vec3 m_position;
         glm::vec3 m_velocity;
         glm::vec3 m_colour;
         glm::vec3 m_force;
         glm::vec3 m_acceleration;
         float m_mass;
     };

     Then I store the particles in a vector:

     m_particles.reserve(n); // resize crashed the program, so I used reserve instead
     for (int i = 0; i < n; ++i)
     {
         // set random values for each param
         particle p = particle(pos, vel, col, mass);
         m_particles.push_back(p);
     }

     All the other code stays the same, and I only needed to change the pointers to point to the correct offset in the buffer, etc.:
     glVertexAttribPointer(col, 3, GL_FLOAT, GL_TRUE, sizeof(particle), (GLvoid *)(6 * sizeof(GLfloat))); // colour data

     With this system I can only get about 80,000 particles at a similar framerate, and I'm kind of confused why. There are extra private members in the class, like force and acceleration, that get passed into the single buffer; I removed them and still saw no difference. But is it OK if I leave that data in the buffer even if I never use it on the GPU side? Should I always use structs for passing data to OpenGL? My vector container holds the object:

     std::vector<particle> m_particles;

     I tried making it a pointer to the object, but then I couldn't correctly pass the data to OpenGL; it would draw garbage. Would making it a pointer even help performance?

     std::vector<particle *> m_particles;

     Also, I am using one buffer; is it better to split the buffers up, so a separate buffer each for position, velocity, colour, etc.? Sorry for all the questions; I have tried researching, but my problem is not a common error. Thanks for your time.
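     One option between "struct only" and "pointers" is to keep the class but copy just the GPU-visible attributes into a tight interleaved array each frame before uploading with glBufferData; velocity, force, etc. then never reach the GPU. A minimal sketch under assumed names (the `vec3`/`particle` stand-ins and `packForGpu` are mine; whether the copy cost pays off depends on the particle count, as the first post in this feed found):

     ```cpp
     #include <cassert>
     #include <vector>

     // Minimal stand-ins for the particle class and GLM vector from the post.
     struct vec3 { float x, y, z; };

     struct particle
     {
         vec3 position, velocity, colour, force, acceleration;
         float mass;
     };

     // Pack only position and colour into one interleaved float array
     // (x, y, z, r, g, b per particle), suitable for a single buffer
     // upload with a stride of 6 floats.
     std::vector<float> packForGpu(const std::vector<particle>& ps)
     {
         std::vector<float> out;
         out.reserve(ps.size() * 6);
         for (const particle& p : ps) {
             out.push_back(p.position.x);
             out.push_back(p.position.y);
             out.push_back(p.position.z);
             out.push_back(p.colour.x);
             out.push_back(p.colour.y);
             out.push_back(p.colour.z);
         }
         return out;
     }
     ```

     The attribute pointers would then use a stride of 6 * sizeof(float) and offsets 0 and 3 * sizeof(float), independent of whatever extra members the class carries.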