Estimating the performance of my application a priori?

7 comments, last by Kylotan 7 years, 6 months ago

I'm currently having some very vivid ideas for an application, and I'm confident that this time I've come up with a fully fleshed-out concept. The application is not exactly a game, but more of a simulator. To be more precise, it simulates the evolution of small artificial entities, but with quite a lot of subsystems included, so there is a lot of stuff that can happen. Because of the various systems at work and the calculations that have to be performed, I need some way to estimate in advance whether the application I'm drafting will perform well at reasonable settings. Of course the user might deliberately choose settings that lead to bad performance, and I don't want to stop them from doing that. But I still need an estimate of how performance might behave when scaling up the simulation, and what I as a designer could do to draft the subsystems in such a way that performance stays at an optimum level.


This is taking premature optimization to a whole new level.

I dunno. Explain what you want to do and see if it requires any traveling-salesman kind of solution.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

True that.

But in general if you're looking to simulate a whole lot of things, the more you can do to vectorize the calculations, the better off you are, and the time to think about that is now rather than when you've already committed to operations that can't be vectorized.

So instead of having an object for each entity with fields x, y, and z, and methods with lots of if statements that operate on the entities, and then looping through your millions of entities, try to have each field x, y, and z be a vector or tensor, have each entity be an index into those vectors, and have the decisions be masks (vectors of the same dimensionality where 1 means "perform the operation" and 0 means "don't perform it"). For example, say there's some rule in the simulation that when the local temperature is above a certain threshold, fifty percent of entities above a certain size immediately lose a quarter of their velocity. Instead of going "if temp > 100 and e.size > 0.5 and rand() > 0.5, then e.vel *= 0.75", have size and vel and rand be vectors (with length equal to the number of entities), and calculate "mask = temp > 100 & size > 0.5 & rand > 0.5; vel -= vel * 0.25 * mask". (That's a silly and unrealistic rule, but it illustrates that even discrete decisions like "do this or not" can be treated as matrix operations.)

This doesn't have different big O behavior than the previous, but your math library can probably execute this much, much faster than anything you'd write as a loop, and it's even faster if you can get it running on the GPU.
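
To make that concrete, here's a minimal sketch of the mask approach, assuming NumPy as the math library; the entity count and thresholds are just the made-up numbers from the example rule above.

```python
import numpy as np

# Hypothetical structure-of-arrays state for N entities (flat arrays, not objects).
N = 1_000_000
rng = np.random.default_rng()
size = rng.uniform(0.0, 1.0, N)     # per-entity size
vel = rng.uniform(0.0, 10.0, N)     # per-entity speed
temp = rng.uniform(80.0, 130.0, N)  # local temperature at each entity

# Build a boolean mask instead of branching per entity.
mask = (temp > 100.0) & (size > 0.5) & (rng.random(N) > 0.5)

# Apply the rule only where the mask is true: lose a quarter of the velocity.
vel -= vel * 0.25 * mask
```

The same structure-of-arrays layout also maps well onto GPU array libraries later, if you need even more throughput.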

But I somehow need to get an estimate, of how the performance might behave when scaling up the simulation and what I probably could do as a designer to draft the subsystems in such a way that the performance is running at an optimum level.


As a designer, you just say what kind of "performance" you require. Let the engineers figure out how to do it. Be prepared to propose alternate scenarios if they tell you it can't be done.

-- Tom Sloper -- sloperama.com

This is taking premature optimization to a whole new level.

I know people often bring up that Knuth quote... but in my experience, while you need to beware of the pitfalls, it can be a very good idea to think about an efficient approach to your whole problem from the start. Many projects and businesses have failed as a result of not taking the time to think carefully about the best way of solving a problem before diving in.

Alright, you probably won't get it right the first time, and you'll typically end up redoing it several times until you arrive at your current 'best solution', but you can still save yourself a lot of wasted time. And the big point the 'premature optimization' argument misses is that in many business environments you might not even get the opportunity to refactor (try explaining software development to a bunch of clueless money crunchers); you might only get one shot.

Typically you need to liaise with a programmer and get them to write some test code for the kind of calculations you'll be doing on each unit, to get an idea of how many are practical, and design around that.

But other than that, your problem domain is a little too vague for us to give specific recommendations. For instance, is this a scientific application? Do the results for each unit need to be calculated exactly, or can they be estimated? Is it running on one machine or multiple? GPU, multithreading, etc.?

There will always be some limitations; you just can't have several million entities all interacting with each other at the same time. Usually what applications do is disable the entities that don't impact the final result too much (are they far away from the player? do they move slowly?) or simply not create them until they are needed. We have too little information about what you're trying to do to estimate anything.
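
For illustration, here's a rough sketch of that kind of distance-based activity check. Everything in it is made up for the example: the radius, the entity count, and the full_update / cheap_update names are placeholders, not anything from a real engine.

```python
import numpy as np

# Hypothetical 2D positions for N entities plus a point of interest (player/camera).
N = 500_000
rng = np.random.default_rng()
positions = rng.uniform(-1000.0, 1000.0, (N, 2))
focus = np.array([0.0, 0.0])
ACTIVE_RADIUS = 200.0  # arbitrary cutoff

# Only entities inside the radius get the expensive update this step.
dist_sq = np.sum((positions - focus) ** 2, axis=1)
active = dist_sq < ACTIVE_RADIUS ** 2

# full_update(positions[active])    # detailed simulation for nearby entities
# cheap_update(positions[~active])  # coarse or skipped simulation for the rest
```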

But I somehow need to get an estimate, of how the performance might behave when scaling up the simulation and what I probably could do as a designer to draft the subsystems in such a way that the performance is running at an optimum level.

Simple.

Your simulation will model a number of things. Some will be easy to model, some more complex. Some will run fast, others won't. What you need to model will determine the algo used, which determines the speed. Programming knowledge will be required. Things will fall into two basic categories: potentially slow algos, and everything else. Rapid prototype the slow algos and get some execution times, then multiply by the number of entities you want to model in the long run, and you have your basic ballpark estimate of update time for the game. From there you can determine if you're already fast enough, if some particular algo(s) need optimization, or if some algo(s) are just too slow and alternatives will need to be found.
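
As a minimal sketch of that kind of ballpark measurement: update_entity below is just a stand-in, so swap in a prototype of whatever your slowest subsystem actually does, and pick your own sample and target counts.

```python
import time

def update_entity(state):
    # Stand-in for the real per-entity rules; replace with a representative
    # prototype of your slowest subsystem.
    x, y, energy = state
    return (x + 0.1, y * 0.99, energy - 0.01)

SAMPLE = 100_000    # how many entities we actually time
TARGET = 5_000_000  # how many the full simulation should handle
states = [(1.0, 2.0, 3.0)] * SAMPLE

start = time.perf_counter()
states = [update_entity(s) for s in states]
elapsed = time.perf_counter() - start

per_entity = elapsed / SAMPLE
print(f"{per_entity * 1e6:.2f} microseconds per entity")
print(f"~{per_entity * TARGET:.2f} s per update step for {TARGET:,} entities")
```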

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

Add up how many megabytes of data you need to access per simulation timestep. Multiply by the framerate. Compare this to your RAM throughput. If your estimate of your simulation's data throughput is larger than your RAM's actual throughput, then yes, your sim is too complex (or the timestep is too low). Tweak it and try again.
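
A quick back-of-the-envelope version of that check might look like this; every number below is a placeholder, so measure or look up your own.

```python
# Back-of-envelope memory bandwidth check (all figures are made-up placeholders).
entities = 2_000_000
bytes_per_entity = 64     # state touched per entity per timestep
steps_per_second = 60     # simulation rate

required = entities * bytes_per_entity * steps_per_second  # bytes per second
ram_bandwidth = 20e9      # ~20 GB/s, a rough desktop figure (assumed; measure yours)

print(f"Required: {required / 1e9:.1f} GB/s, available: {ram_bandwidth / 1e9:.1f} GB/s")
if required > ram_bandwidth:
    print("Memory-bound at these settings; shrink the per-entity state or the step rate.")
```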

simulates the evolution of small artificial entities

How many? That's your first, and by far the most important question.

a lot of stuff which can happen

The next question is "what stuff?" Specify that, pass it to a coder along with the quantity from the previous question, and you'll get an estimate.

This topic is closed to new replies.
