vazix

Estimating a physics engine's performance

This topic is 1666 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Hello,

I visited the GameDev forums a while ago with various epic game ideas and concepts. With my current understanding of programming, I realize how complex even a simple card game can get, with all the different interactions between players, cards and so on...

Since I am a physics student and a fan of the 2D Worms games, I tried my luck at creating a physics engine prototype with which I would hopefully make everything (besides characters, weapons...) out of tiny particles, and thus destructible and uniform.

My current working demo uses JavaScript and canvas.

If I accept the risk of some simulation bugs and weird particle behaviours, I can get almost 10 000 particles simulated at 30 fps. By fiddling with the simulation speed and other variables, I was able to achieve soft bodies of a sort, which break apart if they hit the ground at too great a speed.

 

The biggest obstacle to my goal is incompressible fluids and solid objects. For example: if a solid box hits the ground, I would want it to stop almost instantly. My simulation only calculates the interactions between nearby particles (usually one to six) per iteration. If my box is 100 px tall and I want it to stop moving within 5 frames, I would have to do at least 20 iterations per frame to have it (the top part) completely stopped, assuming the gap between particles is one pixel.
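As a sanity check on that reasoning, here is the arithmetic as a tiny helper (names are mine, not from the demo): each relaxation iteration propagates the collision response roughly one particle layer further into the body, so the frame budget fixes the iteration count.

```javascript
// Back-of-envelope: iterations per frame needed to fully stop a
// vertical stack of particles within a given number of frames,
// assuming one layer of particles is resolved per iteration.
function iterationsPerFrame(heightPx, gapPx, stopFrames) {
  const layers = heightPx / gapPx;        // particle layers in the box
  return Math.ceil(layers / stopFrames);  // layers to resolve each frame
}

// A 100 px box with 1 px particle spacing, stopped over 5 frames:
console.log(iterationsPerFrame(100, 1, 5)); // 20
```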

This rough benchmark demo tells me that JavaScript can execute only about 5 000 000 simple calculation cycles per frame, which is far less than I would need to simulate 2048×1024 particles (my goal for something like a full-HD playing world) with the previously mentioned fast responses. My reasons for making everything out of particles are mostly gameplay-specific, but I am also keen to find out whether it is possible at all.
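Plugging the numbers quoted above into a quick budget check (all figures are the rough ones from this post, not measurements of the actual engine):

```javascript
// Rough budget check: measured ~5e6 simple operations per frame in the
// browser, versus the cost of a full 2048×1024 particle grid with ~6
// neighbour interactions and ~20 solver iterations per frame.
const budget = 5_000_000;
const particles = 2048 * 1024;      // ≈ 2.1 million
const neighbours = 6;
const iterations = 20;
const needed = particles * neighbours * iterations;
console.log(Math.round(needed / budget)); // ≈ 50× over budget
```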

 

I am looking into OpenCL as the target for ("porting") my physics engine. I am starting to get familiar with kernels and all that good stuff, but before moving on I would like to know if there is a good way to estimate the worst-case performance of my game engine. Is it possible to determine whether my engine is feasible with just a rough count of how many calculations I will need to carry out? Or do I need to build a full physics engine prototype just to find out where the unexpected bottlenecks appear?

 

I can give more details on how my engine works (it's all in the webpage's source, as unique and precious as I think it is at the moment) and why I think it can be parallelised very easily, but this post is dragging on already.

P.S. I would gladly discuss the physics of my engine (or the engine of my physics) in more depth; I'm just not sure where to do so.


but before moving on I would like to know if there is a good way to estimate the worst-case performance of my game engine. Is it possible to determine whether my engine is feasible with just a rough count of how many calculations I will need to carry out? Or do I need to build a full physics engine prototype just to find out where the unexpected bottlenecks appear?


Not really, outside of extreme cases. Obviously you want to make an attempt at figuring out the computational complexity of your algorithms ("big O" notation stuff) and rethink any parts that are worse than polynomial time for the number of particles/shapes you're dealing with.
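To make the complexity point concrete, here is a rough count of interaction checks for a naive all-pairs test versus a grid that caps each particle at a handful of nearby candidates (the numbers are illustrative):

```javascript
// Interaction checks per frame for n particles:
// all-pairs is O(n^2); a bounded-neighbour grid is O(n·k).
const n = 10_000;
const k = 6;                         // candidates per particle on the grid
const allPairs = n * (n - 1) / 2;    // every unordered pair
const gridChecks = n * k;            // fixed neighbourhood per particle
console.log(Math.round(allPairs / gridChecks)); // ≈ 833× more work
```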

Aside from that, the only realistic thing to do is throw a stress test at your implementation: toss 3x to 10x as many objects as you expect it to handle and see how well it handles the load, then optimize and refactor and redesign until you get it handling that number of objects reasonably well without spikes.

Remember that for games, worst-case performance is more important than average-case most of the time. Having 60 FPS that drops to 1 FPS every 200 frames is much, much worse than just running at a steady 30 FPS.
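The stress-test advice above can be sketched as a tiny harness that records the worst frame time rather than the average, since spikes are what players notice. `stepSimulation` here is a placeholder for the engine's per-frame update, not an actual API:

```javascript
// Minimal stress-test harness sketch: run the per-frame update many
// times under an exaggerated load and report the worst frame time.
function stressTest(stepSimulation, frames) {
  let worstMs = 0;
  for (let i = 0; i < frames; i++) {
    const t0 = Date.now();
    stepSimulation();                       // the engine's frame update
    worstMs = Math.max(worstMs, Date.now() - t0);
  }
  return worstMs; // worst observed frame time in milliseconds
}
```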


Is it feasible to create a physics engine that always performs near the worst case scenario?

In my code I divide the whole physics world into a grid, and each cell can hold only one particle (because of the forces); each particle stores coordinates relative to its cell. To calculate the forces with nearby particles I just iterate over the neighbouring cells, first checking whether they are filled. This approach seemed quite neat to me, because the coordinate system maps directly to array indexes and it is obvious which particles are close. In the worst case almost every particle will have about six particles nearby, so almost every nearby cell will be filled.
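A minimal sketch of that grid layout, as I understand it from the description (identifiers are illustrative, not taken from the demo's source):

```javascript
// One-particle-per-cell grid: the world is a flat array indexed by
// cell, and neighbour lookups probe the eight surrounding cells.
const W = 8, H = 8;
const grid = new Array(W * H).fill(null); // null = empty cell

function neighbours(cx, cy) {
  const found = [];
  for (let dy = -1; dy <= 1; dy++) {
    for (let dx = -1; dx <= 1; dx++) {
      if (dx === 0 && dy === 0) continue;  // skip the particle's own cell
      const x = cx + dx, y = cy + dy;
      if (x < 0 || x >= W || y < 0 || y >= H) continue; // world bounds
      const p = grid[y * W + x];
      if (p !== null) found.push(p);
    }
  }
  return found; // at most 8 candidates, typically ~6 in dense regions
}

grid[3 * W + 3] = { id: 1 };
grid[3 * W + 4] = { id: 2 };
console.log(neighbours(3, 3).length); // 1
```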

 

In other words: is it beneficial to avoid unnecessary calculations when there are few particles, if I am aiming to minimise the number of calculations in the worst case?

