
Member Since 07 Nov 2010
Last Active Nov 23 2015 03:23 PM

Topics I've Started

Smoothed Particle Hydrodynamics

22 September 2015 - 05:57 PM

Hello all!


I'm pretty much at my wit's end with SPH. I've implemented incompressible Eulerian fluid simulations before, but this is proving to be a bit more challenging.


So I just want to make sure I have everything down, and that I'm not getting anything horribly wrong. From what I understand, the algorithm for basic SPH is:

for each particle
    calculate density and pressure
for each particle
    calculate gradient of pressure and apply it to the particle's velocity
    apply gravity
for each particle
    advect the particle
    enforce boundary conditions


For calculating the density, I use the poly6 kernel:

\[ W( \vec{r} ) = \frac{315}{64 \pi h^9} (h^2 - |\vec{r}|^2)^3 \]


For calculating the pressure force, I use the gradient of the spiky kernel:

\[ W( \vec{r} ) = \frac{15}{\pi h^6} (h - |\vec{r}|)^3 \]

\[ \nabla W( \vec{r} ) = - \frac{45 (h - |\vec{r}|)^2}{\pi h^6} \, \frac{\vec{r}}{|\vec{r}|} \]


For calculating pressure I use \( P = k (\rho - \rho_0) \), although I've seen a bunch of references to an equation with an exponent of 7 (the Tait equation) floating around.
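
To make sure I'm not misreading my own code, here's a minimal O(n²) Python sketch of the density/pressure pass and the pressure force, using the kernels above. The constants (h, k, rho_0, particle mass) are placeholder values for illustration, not tuned:

```python
import math

H = 0.1        # smoothing radius h (placeholder)
K = 1000.0     # stiffness constant k (placeholder)
RHO0 = 1000.0  # rest density rho_0 (placeholder)
MASS = 0.02    # per-particle mass (placeholder)

POLY6 = 315.0 / (64.0 * math.pi * H**9)
SPIKY_GRAD = -45.0 / (math.pi * H**6)

def compute_density_pressure(positions):
    """O(n^2) density via the poly6 kernel; pressure via P = k*(rho - rho_0)."""
    densities, pressures = [], []
    for xi, yi in positions:
        rho = 0.0
        for xj, yj in positions:
            r2 = (xi - xj)**2 + (yi - yj)**2
            if r2 < H * H:
                rho += MASS * POLY6 * (H * H - r2)**3
        densities.append(rho)
        pressures.append(K * (rho - RHO0))
    return densities, pressures

def pressure_forces(positions, densities, pressures):
    """Symmetrized pressure force using the spiky kernel gradient."""
    forces = []
    for i, (xi, yi) in enumerate(positions):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            r = math.hypot(dx, dy)
            if 0.0 < r < H:
                # F_i -= m * (P_i + P_j) / (2 * rho_j) * grad W_spiky(r_ij)
                coeff = MASS * (pressures[i] + pressures[j]) / (2.0 * densities[j])
                grad = SPIKY_GRAD * (H - r)**2  # scalar part; direction is r_ij/|r_ij|
                fx -= coeff * grad * dx / r
                fy -= coeff * grad * dy / r
        forces.append((fx, fy))
    return forces
```

With two particles the densities come out identical and the pressure forces exactly equal and opposite, which is at least a cheap symmetry sanity check.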


All of this checks out with what academic papers say, but in my current implementation, the particles immediately fall to the bottom of the screen and start squeezing together. I wish I had a video of it, but it happens so fast that OBS can't capture it.


I personally suspect that the gradient of the spiky kernel is the issue, just based on how the fluid clumps together in strange ways once all the particles fall to the bottom. I'm not applying any near-pressure, just to keep everything simple, but even without it the simulation shouldn't collapse to the bottom in 0.5 seconds. Another thing I'm suspicious of is the k, rho_0, and h constants, because it's a little difficult to tune the simulation when it doesn't last very long.


Does anyone with SPH experience have some insight to lend? I implemented this paper, but it can't be mapped to the GPU very easily, which is why I'm interested in a more pure SPH approach, so insight on either one would be enormously helpful!


I'm attaching the code for anyone who has the time to look it over. It's one file with a couple of functions, nothing too big or fancy in terms of data structures. All computations are O(n^2) and the structures are unoptimized, just as a sanity check.


Thanks in advance!

GI ground truth for comparison

04 May 2015 - 11:41 PM

Hello there!


So I've been wanting to put my renders up next to something for comparison, to test for correctness. It'd really be helpful if I could have a ground-truth version of the renders I'm doing. After all, being "Physically Based" sort of implies that your renders should look correct.


I've tried some things, and I've heard of others:

- I've tried Mitsuba, as per MJP's advice, but I really can't get into it. It's good, but the XML is ridiculously obtuse and wordy.

- I also don't like Blender Render because of the thousands of presets that do things I'm not accounting for. Basically, it's hard to use for making simple test scenes to render.

- I haven't yet tried Pixar's now-public Renderman, but I'm going to try that one out soon.

- I've read that Cinema4D has it all, but it's a little on the extremely pricey side.


What sort of GI programs do you use, or would you recommend, for comparing renders against?


Thanks in advance!

Quick question about importance sampling

11 March 2015 - 11:45 PM

Heyyy there!


So, following PBR and the canonical rendering equation, I was trying to figure out how you would go about importance sampling by sampling a BRDF directly. You know, by generating a bunch of random rays in a hemisphere around a surface normal and sampling the BRDF, instead of cosine-weighted sampling or sampling lobes or whatnot. It's not the fastest or best way by a long shot, but it'd be useful when I'm trying out different specular reflectance functions, or playing with different BRDFs in general.


Sampling the BRDF would create a weighted distribution. I'm thinking that you could do something like:

sample N rays and give each a weight equal to brdf(...) * cos(angle between the normal and the ray direction)
pick one of the rays at random, with probability proportional to its weight
the density for the picked ray is (the weight of the picked ray) / (sum of all the weights)
the new ray to cast is the picked ray
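
Here's roughly how I'd sketch those steps in Python; I believe this idea goes by the name resampled importance sampling. The helper names, the candidate count, and the uniform-hemisphere source distribution are all my own choices for illustration:

```python
import math
import random

def sample_uniform_hemisphere(rng):
    """Uniform direction on the z-up hemisphere; source pdf = 1 / (2*pi)."""
    z = rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * rng.random()
    return (r * math.cos(phi), r * math.sin(phi), z)

def resample_brdf_direction(brdf, n_candidates=16, rng=random):
    """Draw candidates, weight each by brdf * cos(theta) / source_pdf, then
    pick one with probability proportional to its weight.
    Returns (direction, picked_weight, sum_of_weights)."""
    source_pdf = 1.0 / (2.0 * math.pi)
    dirs, weights = [], []
    for _ in range(n_candidates):
        d = sample_uniform_hemisphere(rng)
        cos_theta = d[2]  # surface normal is +z
        dirs.append(d)
        weights.append(brdf(d) * cos_theta / source_pdf)
    total = sum(weights)
    if total <= 0.0:
        return dirs[0], 0.0, 0.0
    # roulette-wheel selection: pick index i with probability weights[i] / total
    u = rng.random() * total
    acc, pick = 0.0, len(dirs) - 1
    for i, w in enumerate(weights):
        acc += w
        if u <= acc:
            pick = i
            break
    return dirs[pick], weights[pick], total
```

I'm returning both the picked weight and the weight sum rather than a single "density", because the w / sum(w) number by itself is a discrete probability over the N candidates, not a pdf over the hemisphere; the estimator needs both pieces.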

However, I don't exactly know if that's right. I haven't really been able to find anything about it. Mostly, people talk about the PDF and how you use it to weight different rays, but when it comes down to the derivation, it's usually left out, so I don't really know how it's supposed to be calculated, or if I'm treating the concept right. I've also never heard of a BRDF-weighted distribution, but that's probably because it's not a good idea.


I tested it out on a calculator, using a cosine-weighted distribution as a test, and the PDF seemed pretty close to what it should be. I also tried it out in a pathtracer I have lying around, and it seems to work, but then again, I don't really trust my eyes with this stuff anymore.


So is this a correct method of generating a BRDF-weighted ray? Or is it more complicated than I think it is?


Thanks!

Seeing what GD.net article someone's written

21 February 2015 - 11:43 PM

Heyy all!


I was wondering if there's an easier way of seeing what GD.net article a forum member has written. I see Contributor tags all the time, and out of curiosity I wonder what sort of articles they've written, or if they've written anything new since I last saw.


For example, just to pick on someone, a couple of days ago I remembered @frob's articles on data structures, and I wanted to see if he had written any more. The way I normally do that is to go to their page, click Find Content, then click Resources, which isn't very intuitive. Is there a better (easier/more intuitive) way to find them?


As a different, somewhat related question, is there an RSS feed for all articles? I know there are separate ones for each section they're posted under, but I haven't found a master feed.

Vector-based vs Tile-based worlds

09 February 2015 - 11:18 AM

Heyy all!


I'm starting to think that a vector-based world would make the game I'm creating look and feel magnitudes better. I'm going to try it out regardless, but I have bad memories of the last time I tried to make a non-tile-based 2D platformer. It wasn't exactly a pleasant experience, so I'd like to know what you guys think.


For reference, here's an older video of the type of game I'm making:


Here's what I mean by vector-based worlds: http://higherorderfun.com/blog/wp-content/uploads/2012/05/Talbot_Bitmask_2.png. You know, games that don't just use tiles to block out the game world.


So, without a shadow of a doubt, tile-based worlds are much easier to make and much easier to iterate upon. Unfortunately, they don't look as gorgeous as fully vectorized ones. Tile-based editors like Tiled have functionality that lets you create tile objects (like a platform), give them properties (like following a path or using a Lua script), and do other things that make the process go a lot more smoothly.


So, the last time I tried making a game without tiles or bricks, I used Illustrator to make the levels, exported them as .SVGs, and parsed those to load the game levels. Because there's no property system, I had to resort to naming layers to denote which objects had which properties (since names in SVG files are easy to parse). But it all felt very hack-y, you know?
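
For what it's worth, the layer-naming hack doesn't take much code. Here's a sketch using a made-up "name.key-value" convention packed into the id attribute (dots and hyphens are legal in XML ids, which is why I'm not using '='):

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def layer_properties(svg_text):
    """Map each top-level <g> layer to properties encoded in its id,
    using a made-up 'name.key-value.key-value' naming convention."""
    root = ET.fromstring(svg_text)
    layers = {}
    for g in root.findall(SVG_NS + "g"):
        parts = (g.get("id") or "").split(".")
        name, props = parts[0], {}
        for part in parts[1:]:
            if "-" in part:
                key, value = part.split("-", 1)
                props[key] = value
        layers[name] = props
    return layers
```

So a group with id="platform.path-loop.speed-2" would come back as a "platform" layer with path and speed properties, and a plain id="door" group just gets an empty property set.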


What are your experiences with vector-based game worlds? Did you ever find a good solution to the issues of:

  • a good vector-art program that's both fully SVG compliant (or compliant enough to relieve you of headaches, unlike Illustrator) and actually nice to work with?
  • world content generation (like procedurally generating grass, flowers, etc. to reduce the art-creation burden)
  • texturing (because SVG doesn't really like texturing things, and especially doesn't like smooth transitioning texturing between different textures)
  • attaching scripts or other properties to different objects (like platforms, doors, enemies, etc)
  • ease of prototyping (cause let's face it, tile-based worlds are easy because, just by looking at one, you can see whether or not it'll work. With vector-based worlds, not so much)
  • other things I'm forgetting or haven't realized yet?

Thanks in advance for the advice!