Estimating a physics engine's performance

2 comments, last by vazix 9 years, 11 months ago

Hello,

I visited the gamedev forums a while ago with various epic game ideas and concepts. With my current understanding of programming, I realize how complex even a simple card game can get, with all the interactions between players, cards, and so on...

Since I am a physics student and a fan of the 2D Worms games, I tried my luck at creating a physics engine prototype with which I would hopefully make everything (besides characters, weapons, ...) out of tiny particles, and thus destructible and uniform.

My current working demo uses JavaScript and canvas.

If I accept the risk of some simulation bugs and weird particle behaviours, I can get almost 10 000 particles simulated at 30 fps. By fiddling with the simulation speed and other variables, I was able to achieve soft bodies of a sort that would break apart if they hit the ground at too great a speed.

The biggest obstacle to my goal is incompressible fluids and solid objects. For example, if a solid box hits the ground, I want it to stop almost instantly. My simulation only calculates the interactions between nearby particles (usually one to six) per iteration. If my box is 100px tall and I want it to stop moving within 5 frames, I would have to run at least 20 iterations per frame to have the top part completely stopped, given a one-pixel gap between particles.
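The arithmetic behind that iteration count can be made explicit. With purely local interactions, the "stop" signal can travel at most one particle gap per iteration, so the iteration budget follows directly (a back-of-envelope sketch; all numbers are the illustrative ones from the post, not measured values):

```javascript
// With local interactions only, information propagates at most one
// particle gap per solver iteration. Illustrative numbers from the post:
const boxHeightPx = 100;   // height of the falling box in pixels
const gapPx = 1;           // spacing between adjacent particles
const framesToStop = 5;    // frames allowed for the whole box to stop

// Gaps the stop signal must cross, and iterations needed per frame:
const cellsToCross = boxHeightPx / gapPx;                          // 100
const iterationsPerFrame = Math.ceil(cellsToCross / framesToStop); // 20

console.log(iterationsPerFrame); // 20
```

This also shows why the cost scales with object size: a box twice as tall needs twice the iterations for the same stopping time.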

This rough benchmark demo tells me that JavaScript can execute only about 5 000 000 simple calculation cycles per frame, which is far less than I would need to simulate 2048×1024 particles (my goal for something like a full-HD playing world) with the fast responses mentioned above. My reasons for making everything out of particles are mostly gameplay-specific, but I am also keen to find out whether it is possible at all.

I am looking into OpenCL to port (re-create) my physics engine in. I am starting to get familiar with kernels and all that good stuff, but before moving on I would like to know whether there is a good way to estimate the worst-case performance of my game engine. Is it possible to determine whether my engine is feasible from just a basic knowledge of how many calculations I will need to carry out? Or do I need to build a full physics engine prototype just to see where the unexpected bottlenecks appear?

I can give more details on how my engine works (it's all in the webpage's source, as unique and precious as I think it is at the moment) and why I think it can be parallelised very easily, but my question is dragging on already.

P.S. I would gladly discuss the physics of my engine (or the engine of my physics) in more depth; I'm just not sure where to do so.


Particle-based physics systems are cool!

I did one back in the Half-Life 1 days to add vehicles to the game. Instead of having dense/uniform particles, mine were spread out to just the corners of the objects (or more added as needed to stop them driving through walls so much...). To keep the objects solid, I used "constraints", where particles can be linked together and told to remain a certain distance apart. After doing motion for each particle, they'd each iterate through their list of links. If the two particles in a link were too far apart, they were moved together, or if they were too close, they'd be moved apart. These 'links' meant that when the front of a vehicle collided with a wall, the back of the car could be instantly moved backwards, due to the front/back now being too close together.
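The link-relaxation idea described above can be sketched in a few lines. This is a minimal, generic distance-constraint solver, not the Half-Life code itself; the names (`solveLinks`, `restLength`) and the move-each-endpoint-half-the-error choice are illustrative assumptions:

```javascript
// Minimal sketch of iterative distance-constraint relaxation.
// Each link holds two particles and a rest length; if they drift too far
// apart they are pulled together, if too close they are pushed apart.
function solveLinks(particles, links, iterations) {
  for (let k = 0; k < iterations; k++) {
    for (const { a, b, restLength } of links) {
      const dx = b.x - a.x;
      const dy = b.y - a.y;
      const dist = Math.hypot(dx, dy) || 1e-9; // guard against divide-by-zero
      // Fraction of the offset each endpoint must move to remove the error,
      // split evenly between the two particles:
      const correction = (dist - restLength) / dist / 2;
      a.x += dx * correction; a.y += dy * correction;
      b.x -= dx * correction; b.y -= dy * correction;
    }
  }
}
```

Running several iterations per frame lets corrections propagate through chains of links, which is exactly the "front of the car pushes the back" behaviour described above.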

You'll probably be quite interested in this new SIGGRAPH paper: http://blog.mmacklin.com/flex/

I wouldn't be surprised if the performance difference between OpenCL and JavaScript was over 1000x.

But yeah, you'll probably have to get stuck in and do some tests/experiments to see what the performance will be like...

but before moving on I would like to know whether there is a good way to estimate the worst-case performance of my game engine. Is it possible to determine whether my engine is feasible from just a basic knowledge of how many calculations I will need to carry out? Or do I need to build a full physics engine prototype just to see where the unexpected bottlenecks appear?


Not really, outside of extreme cases. Obviously you want to make an attempt at figuring out the computational complexity of your algorithms ("big O" notation stuff) and rethink any parts that are worse than polynomial time for the number of particles/shapes you're dealing with.

Aside from that, the only realistic thing to do is throw a stress test at your implementation: toss 3x to 10x as many objects as you expect it to handle and see how well it handles the load, then optimize and refactor and redesign until you get it handling that number of objects reasonably well without spikes.
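A stress test like that can be a very small harness: run a fixed number of frames at several particle counts and record the worst single-frame time (since, as noted below, spikes matter more than averages). This is a sketch, assuming a `stepWorld` update function standing in for the real engine:

```javascript
// Rough stress-test harness: time `frames` simulation steps at several
// particle counts and report the worst single-frame time for each.
// `stepWorld` is a placeholder for the engine's per-frame update.
function stressTest(stepWorld, counts, frames) {
  const results = [];
  for (const n of counts) {
    // Flat array of x, y, vx, vy per particle, as a simple test world.
    const world = { particles: new Float64Array(n * 4) };
    let worstMs = 0;
    for (let f = 0; f < frames; f++) {
      const t0 = performance.now();
      stepWorld(world);
      worstMs = Math.max(worstMs, performance.now() - t0);
    }
    results.push({ n, worstMs });
  }
  return results;
}
```

Tracking the maximum rather than the mean surfaces exactly the frame spikes that ruin gameplay.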

Remember that for games, worst-case performance is more important than average-case most of the time. Having 60 FPS that drops to 1 FPS every 200 frames is much, much worse than just running at a steady 30 FPS.

Sean Middleditch – Game Systems Engineer – Join my team!

Is it feasible to create a physics engine that always performs near the worst case scenario?

In my code I divide the whole physics world into a grid; each cell can hold only one particle (because of the forces), and the particle has coordinates relative to its cell. To calculate the forces with nearby particles I just iterate over the neighbouring cells, first checking whether they are filled. This approach seemed quite neat to me, because the coordinate system maps directly onto array indexes and it is obvious which particles are close. In the worst case, almost every particle will have about six particles nearby, so almost every neighbouring cell will be filled.
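The neighbour lookup described here amounts to scanning the eight surrounding cells of a flat array. A minimal sketch under those assumptions (one particle per cell, empty cells hold `null`; the function name and layout are illustrative, not taken from the actual demo source):

```javascript
// Visit every particle in the eight cells surrounding cell (cx, cy).
// `grid` is a flat row-major array of length cols * rows; empty cells
// hold null, matching the "check if filled first" approach above.
function forEachNeighbour(grid, cols, rows, cx, cy, fn) {
  for (let dy = -1; dy <= 1; dy++) {
    for (let dx = -1; dx <= 1; dx++) {
      if (dx === 0 && dy === 0) continue;        // skip the cell itself
      const nx = cx + dx, ny = cy + dy;
      if (nx < 0 || ny < 0 || nx >= cols || ny >= rows) continue; // world edge
      const p = grid[ny * cols + nx];
      if (p !== null) fn(p);                      // filled-cell check
    }
  }
}
```

Note the worst-case observation still holds: in a dense region the filled-cell check never skips anything, so the cost per particle is essentially constant at eight cell reads regardless of how clever the early-out is.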

In other words: is it beneficial to avoid unnecessary calculations when there are only a few particles, if I am aiming to minimise the number of calculations in the worst case?

This topic is closed to new replies.
