#### Archived

This topic is now archived and is closed to further replies.

# Metaballs - Normals


## Recommended Posts

I'm sorry - I almost forgot! (School is demanding most of my energy and concentration at the moment.)

The error:
Before running the program, I set my desktop to 24-bit color. Then I run... and get the error. Can you have windows with other pixel formats, or is fullscreen the only solution here?
It's not much, but I hope it's enough...

> I have downloaded the metaballs demo FireFly mentioned. If you want I can mail it to you.

Yes, I'd appreciate it! (I've tried downloading it about 5 times, but my slow modem just can't handle it.)

> hehe this post is long enough

It's a long post all right! There are posts much longer than this one, but they're all about opinions. This is probably one of the longest posts about a technical topic - it feels good to take part in it!

---
Hey WitchLord, just a minute, hehe!

About the sharing of vertices: yes, I do! Only one transformation per vertex. (Hey - an advantage of doing your own software rendering! But then again: probably one of the very few advantages.)
Every polygon contains a list of edges, each of which points to a vertex.
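The shared-vertex layout described above can be sketched roughly like this (the struct and field names are made up for illustration, not Bas's actual code): every vertex lives exactly once in a pool, each polygon's edges point into that pool, and a transform pass walks the pool so each vertex is transformed once no matter how many polygons reference it.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of the shared-vertex layout: one vertex pool,
// edges reference vertices by pointer, transform walks the pool once.
struct Vertex {
    float x, y, z;      // object-space position
    float tx, ty, tz;   // transformed (camera-space) position
};

struct Edge {
    Vertex* a;          // edges point to shared vertices
    Vertex* b;
};

struct Polygon {
    std::vector<Edge> edges;
};

// One transformation per vertex: iterate the pool, not the polygons.
// (A pure translation stands in for the real camera transform here.)
void transform_all(std::vector<Vertex>& pool, float dx, float dy, float dz) {
    for (Vertex& v : pool) {
        v.tx = v.x + dx;
        v.ty = v.y + dy;
        v.tz = v.z + dz;
    }
}
```

Any polygon reading `edge.a->tx` afterwards sees the already-transformed position, which is the whole point of the sharing.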

There's one thing I still haven't optimized - the idea WitchLord pointed out - calculating only the vertices that are close enough to one of the metaballs!
I still haven't found a good implementation technique for this. My source contains lots of pointers (like the vertex pointers), and that makes it all a bit more complicated...

I also fill the energy grid in one loop before interpolating the vertices in a separate loop. This is because the interpolation requires values that are one grid position further along than the current vertex position in the grid.
Maybe setting up the loop a bit differently would allow merging the two loops?

- Bas

---
I can't figure out why you can't run my program in 24-bit color. If you set the desktop to 24-bit, the program should also run in 24-bit in windowed mode. D3DX should automatically choose the pixel format your desktop is using.

Could you try putting these lines in the constructor for CMetaBallApp:

```
m_bFullscreen = true;
m_nColorBits = 24;
```

And then tell me if it works?

---

I think my triangles share the same number of vertices that yours do. But what about the triangles that are so small that their vertices are virtually the same? It would be neat if these could be removed and the vertices fused together. This is what I will try to implement.
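A minimal sketch of that vertex-fusing idea (not WitchLord's eventual implementation - just the usual welding approach): remap each vertex to the first earlier vertex within some epsilon, then drop any triangle whose three remapped corners are no longer distinct. The O(n²) search is fine as an illustration; a grid or hash would speed it up.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

struct Triangle { int v[3]; };

// Weld vertices closer than eps, then remove triangles that collapse.
void weld(const std::vector<Vec3>& verts, std::vector<Triangle>& tris, float eps) {
    std::vector<int> remap(verts.size());
    for (std::size_t i = 0; i < verts.size(); ++i) {
        remap[i] = static_cast<int>(i);
        for (std::size_t j = 0; j < i; ++j) {
            if (dist2(verts[i], verts[j]) < eps * eps) {
                remap[i] = remap[j];   // fuse i into the earlier vertex j
                break;
            }
        }
    }
    std::vector<Triangle> kept;
    for (Triangle t : tris) {
        for (int k = 0; k < 3; ++k) t.v[k] = remap[t.v[k]];
        // a triangle with two fused corners has degenerated: drop it
        if (t.v[0] != t.v[1] && t.v[1] != t.v[2] && t.v[0] != t.v[2])
            kept.push_back(t);
    }
    tris.swap(kept);
}
```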

I too use two loops, as I'm sure you have already figured out. Actually I'm using three, but the third loop, which calculates the normals, can easily be merged with the second, which calculates the vertices.

The metaballs might also be sped up by using raycasting. That way you don't need to compute the inside of the metaballs, nor the parts that are obscured by them. I'm not sure how much computation the raycasting would need, though; it might not be worth it.

- WitchLord

---
Hey WitchLord:
I've tried re-compiling your demo with the changes, but I'm missing the <d3dx.h> header. Maybe I'll install the DirectX SDK in a few days when I have some more time. (Hope you won't hold it against me).

Hey, good idea: removing very small polygons!
Because of the marching cubes, we get "strange" polygon formations. Isn't it too hard to alter the algorithm at this point? Maybe the only way is to take the one-triangle cubes, calculate the center and connect the neighbouring vertices. But then you have to look backward and forward in your vertex list (forward will be trouble!).
To avoid this, you can use the previous algorithm (walk through the entire loop), and afterwards do the entire loop again to cut out the small vertices. This way you can alter forward vertices too.
Isn't the cure getting worse than the disease? (I don't know the expression - I'm from Holland.)

Raycasting sounds really interesting!
Is it possible to lose the polygons behind other polygons too? That way you'd even optimize the polygon count.
Raycasting is normally way too slow, but in this case - using an energy grid - it provides some good opportunities, if you ask me!

I say: go for the raycasting!

Good luck,
- Bas

Edited by - BasKuenen on June 1, 2000 10:42:27 PM

---
No, I won't hold it against you.

I believe it is possible to remove the small polygons already in the first loop. I'll try to implement it after I have put up my next tutorial.

Yes, raycasting will remove polygons behind other polygons too. But I'll leave it up to you to try this technique.

- WitchLord

---
At the moment I'm still a bit busy with school assignments.
It's almost the end of the school year, so all the assignments need to be finished at the same time, at the last moment (you probably know what I'm talking about).

Here's a nice article that may be of use for implementing raytraced optimization: Generic Collision Detection For Games Using Ellipsoids.
Well, maybe the ellipsoids aren't important, but there's a nice explanation of sphere-line collision.
Just sharing thoughts...
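The core of sphere-line collision is the standard ray-sphere test, which is also what a metaball raycaster would lean on (this is the textbook quadratic, not the article's exact code): substitute the ray `o + t*d` into `|p - c|² = r²` and solve the resulting quadratic in `t`.

```cpp
#include <cmath>

struct V3 { float x, y, z; };

static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Solve |o + t*d - c|^2 = r^2 for t. Returns true and the nearest
// positive t when the ray actually hits the sphere in front of it.
bool ray_sphere(V3 o, V3 d, V3 c, float r, float* t_hit) {
    V3 oc = sub(o, c);
    float a = dot(d, d);
    float b = 2.0f * dot(oc, d);
    float cc = dot(oc, oc) - r * r;
    float disc = b * b - 4.0f * a * cc;
    if (disc < 0.0f) return false;                   // line misses the sphere
    float t = (-b - std::sqrt(disc)) / (2.0f * a);   // nearest intersection
    if (t < 0.0f) return false;                      // sphere is behind the ray
    *t_hit = t;
    return true;
}
```

For metaballs the spheres would only bound the energy falloff, so a hit here means "worth evaluating the field along this ray", not the final surface intersection.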

Will resurrect this thread in a few weeks...
- Bas

---
I should have said a few months.
I know - I'm a little late, but I'm working on other stuff now.

For me this was one of the best threads here on GameDev.
Just resurrecting it for the fun of it.

---
Hey guys. Great thread! I am working on a game that uses metaballs HEAVILY. In fact, it will use ONLY metaballs, plus some simple polys for billboarding effects. Everything else will be "metaballs only".

I have a question. Bas, you stated somewhere in the thread that you use the x and y normal coordinates to do environment mapping. How does that work? Is this "spherical" mapping? I mean, do you turn your background texture into a sphere before you do this? And why does this work? I can't understand why using normals for texture coordinates would give good results (although it's great - it saves a LOT of computational power).

Also, another thing I noticed... If all objects are based on metaballs, your collision detection code is already "built in".

That's some sexy stuff

---
No, it's not spherical mapping. Or is it? The effect is the same, and yes, you can turn your texture into a sphere to get a better effect (but it's not needed - I don't).
When I look up sphere mapping, all I see is a lot of formulas - but why not do it faster and easier?

I first transform the vertex normals to camera space.
After that you can use just the x and y parts of the transformed vector as texture coordinates.

The texture gets stretched the more the polygon is "side-facing".
If it's facing the camera everything looks great, and since you cannot see the side of an object from the camera view (obviously), this is no problem.

Why does it work? It only works well for round objects. Metaballs are perfect. I also used it for Phong mapping (lighting).
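A minimal sketch of that trick, assuming the normal is already in camera space and its components lie in [-1,1] (the function name is made up): take the normal's x and y directly and remap them into [0,1] so they can index the environment texture. A normal facing straight at the camera lands in the middle of the texture, and side-facing normals slide toward the edges, which is what produces the stretching Bas describes.

```cpp
// Environment-mapping texture coordinates straight from the camera-space
// normal: remap x and y from [-1,1] to [0,1]. No sphere-map formulas needed.
void env_coords(float nx, float ny, float* u, float* v) {
    *u = nx * 0.5f + 0.5f;
    *v = ny * 0.5f + 0.5f;
}
```

The same lookup with a precomputed intensity texture gives the cheap Phong-style lighting he mentions.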

Good luck with your metaballs! They rule!

---
That's how I do it:

`for (long i=0;i` ... (the rest of the code was cut off by the forum software; it loops over all vertices i and, inside that, over all balls q)

orgball holds the coords of each ball q. ia(i) are all my vertices. nx, ny, nz is the correct vertex normal that comes out; it should be normalised. At the end I compute environment-mapping texture coords.

The algo is by a guy from the Swiss demo group Calodox. Their algo worked but wasn't quite correct; I fixed it, so this one should be perfect.
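Since the original loop was truncated, here is a hedged reconstruction of the analytic normal it describes, not quix_'s actual code: for a field f(p) = Σ 1/r², the outward normal at a surface vertex is the normalized gradient, i.e. the sum over balls of (p − c_q)/r_q⁴. The parameter names below stand in for `orgball` and `ia(i)`.

```cpp
#include <cmath>

// Analytic metaball normal: for f(p) = sum 1/r^2, the gradient term for
// each ball is (p - c_q) / r_q^4, summed over all balls and normalised.
void metaball_normal(const float* p, const float (*balls)[3], int nballs,
                     float* nx, float* ny, float* nz) {
    float gx = 0.0f, gy = 0.0f, gz = 0.0f;
    for (int q = 0; q < nballs; ++q) {
        float dx = p[0] - balls[q][0];
        float dy = p[1] - balls[q][1];
        float dz = p[2] - balls[q][2];
        float r2 = dx * dx + dy * dy + dz * dz;
        float w = 1.0f / (r2 * r2);          // 1/r^4 falloff of the gradient
        gx += dx * w; gy += dy * w; gz += dz * w;
    }
    float len = std::sqrt(gx * gx + gy * gy + gz * gz);
    *nx = gx / len; *ny = gy / len; *nz = gz / len;  // normalised, as noted
}
```

With the normal in hand, the environment-mapping coords fall out of its x and y components as discussed earlier in the thread.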

Here's my ultra-fast OpenGL metaballs demo: http://quixoft.hypermart.net/metagl.zip

Edited by - quix_ on January 14, 2001 8:34:49 AM
