FiveFootFreak

How to estimate code execution speed


Greetings fellow devs. I was wondering if there is some kind of "quick and dirty" method for estimating how far you can push a processor with a given programming language. Say I want to make a game in which 1000 little soldiers battle another 1000 little soldiers. The program would have to calculate movement, targeting, shooting, etc. for every single soldier, and do it about 10 times per second. On its own that would probably be no problem, but imagine there is also AI, sound, and whatnot running in the background. Is there any way to tell whether I would meet my performance requirements? I guess there are ways involving complex math, but I'm not much into that...
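For a rough ballpark you can prototype just the inner loop with dummy data and time it, then extrapolate. Below is a minimal sketch of that idea; the Soldier struct and its update logic are made-up placeholders, not anything from the actual game:

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Hypothetical soldier; the real fields depend on your game.
struct Soldier {
    float x = 0.0f, y = 0.0f;       // position
    float tx = 100.0f, ty = 100.0f; // current destination
};

// Dummy per-soldier update: drift toward the destination.
void updateSoldier(Soldier& s, float dt) {
    s.x += (s.tx - s.x) * 0.01f * dt;
    s.y += (s.ty - s.y) * 0.01f * dt;
}

int main() {
    const int soldierCount = 2000;  // 1000 vs 1000
    const int ticks        = 1000;  // 100 seconds of game time at 10 updates/sec
    std::vector<Soldier> army(soldierCount);

    auto start = std::chrono::steady_clock::now();
    for (int t = 0; t < ticks; ++t)
        for (Soldier& s : army)
            updateSoldier(s, 0.1f);
    auto end = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(end - start).count();
    // Printing army[0].x keeps the optimizer from deleting the whole loop.
    std::printf("%d ticks of %d soldiers: %.3f s total, %.3f ms per tick (x0=%.2f)\n",
                ticks, soldierCount, seconds, seconds * 1000.0 / ticks, army[0].x);
}
```

Compare the measured milliseconds per tick against your 100 ms budget (10 updates per second); the headroom left over is what AI, sound, and rendering have to fit into.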

In general, there is no way to calculate the absolute speed of anything. There is the big-O metric for algorithms, and that is the best way to reason about how your code will scale. Other than that, just start coding, then profile and optimise your problem areas later on.
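To make the big-O point concrete: if every soldier scans every other soldier for a target, 2000 soldiers means roughly 4 million distance checks per tick, or 40 million per second at 10 updates per second. A coarse spatial grid brings each soldier's search down to a handful of nearby cells. A minimal sketch, with a placeholder Soldier struct and an arbitrary cell size:

```cpp
#include <cmath>
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

// Hypothetical soldier: just a position and a team id for this sketch.
struct Soldier { float x, y; int team; };

using Cell = std::pair<int, int>;
using Grid = std::map<Cell, std::vector<int>>;   // cell -> soldier indices

Cell cellOf(const Soldier& s, float cellSize) {
    return { (int)std::floor(s.x / cellSize), (int)std::floor(s.y / cellSize) };
}

// Bucket all soldiers into a coarse grid once per tick.
Grid buildGrid(const std::vector<Soldier>& all, float cellSize) {
    Grid grid;
    for (int i = 0; i < (int)all.size(); ++i)
        grid[cellOf(all[i], cellSize)].push_back(i);
    return grid;
}

// Nearest enemy, scanning only the 3x3 block of cells around the soldier
// instead of all 2000 soldiers.
int nearestEnemy(int self, const std::vector<Soldier>& all,
                 const Grid& grid, float cellSize) {
    const Soldier& me = all[self];
    Cell c = cellOf(me, cellSize);
    int best = -1;
    float bestDist2 = 1e30f;
    for (int dx = -1; dx <= 1; ++dx)
        for (int dy = -1; dy <= 1; ++dy) {
            auto it = grid.find({ c.first + dx, c.second + dy });
            if (it == grid.end()) continue;
            for (int j : it->second) {
                if (all[j].team == me.team) continue;
                float ddx = all[j].x - me.x, ddy = all[j].y - me.y;
                float d2 = ddx * ddx + ddy * ddy;
                if (d2 < bestDist2) { bestDist2 = d2; best = j; }
            }
        }
    return best;
}

int main() {
    std::vector<Soldier> all = { {0, 0, 0}, {5, 5, 1}, {50, 50, 1} };
    Grid grid = buildGrid(all, 10.0f);
    std::printf("nearest enemy of soldier 0: %d\n", nearestEnemy(0, all, grid, 10.0f));
}
```

If no enemy happens to sit in the surrounding 3x3 block of cells, this sketch returns -1; a real implementation would widen the search or keep the previous target.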

That's bad, because I'm at a junction in my current game design where this kind of speed issue is of paramount importance to the rest of the game.

You can estimate what you can do, but you need to relate that to what exactly you are doing with your engine. For example: if you need large indoor levels, look into octrees, portals, and so on. For outdoor terrain, look at LOD terrain algorithms. If you just want to draw a massive number of models at once, search the web for vertex throughput and fill-rate figures for popular video cards.

The point is, this is a complex issue that needs to be related to exactly what you are trying to do.

I'm using GlowCode as the profiler for my stuff. It integrates nicely with Visual Studio, shows the time spent in each function and the number of calls, average speed, which functions it calls, and the total/single-run time spent in those functions beneath it. It also tracks memory leaks, etc.

I suggest you try it; it's available as a free trial version (25 days, I think) here: http://www.codework.com/glowcode/product.html

Yes, I realize this is a rather complex thing, but I thought that maybe someone out there has developed some ground-breaking method of getting quick estimates on problems like this.

Well, I guess I'll have to invest some extra time into prototyping then...

Guest Anonymous Poster
It all depends heavily on what kinds of algorithms/methods you choose to use. You can estimate the performance if you have prior experience with something similar and a hunch about how much more complex this situation is than the one you did earlier.

But I can tell you: I did a 2D game years ago which had about 100 small soldiers fighting 100 other soldiers, and it ran fine. It wasn't that well coded and CPUs weren't so good back then, so I'm sure a 1000 vs 1000 fight is possible on today's computers, assuming you code it well.

If you have very complex AI, though, you can burn all your resources on just a dozen soldiers! That is, for AI you can always use more CPU power. So be aware that you shouldn't hope for much more AI than a very simple state machine with minimal resource usage.
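For what a "very simple state machine" can look like in practice, here is a tiny sketch; the states, fields, and transition conditions are all made up for illustration, and each soldier's decision costs only a few comparisons per tick:

```cpp
#include <cstdio>

// Minimal per-soldier AI state; everything here is a made-up placeholder.
enum class State { Idle, MoveToTarget, Attack, Flee };

struct Soldier {
    State state = State::Idle;
    int   hp = 100;
    bool  enemyInSight = false;
    bool  enemyInRange = false;
};

// One cheap decision per soldier per tick: a handful of comparisons,
// no searches and no allocations.
void think(Soldier& s) {
    switch (s.state) {
    case State::Idle:
        if (s.enemyInSight) s.state = State::MoveToTarget;
        break;
    case State::MoveToTarget:
        if (s.enemyInRange)       s.state = State::Attack;
        else if (!s.enemyInSight) s.state = State::Idle;
        break;
    case State::Attack:
        if (s.hp < 20)            s.state = State::Flee;
        else if (!s.enemyInRange) s.state = State::MoveToTarget;
        break;
    case State::Flee:
        if (!s.enemyInSight) s.state = State::Idle;
        break;
    }
}

int main() {
    Soldier s;
    s.enemyInSight = true;
    think(s);  // Idle -> MoveToTarget
    std::printf("state = %d\n", (int)s.state);
}
```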

Ah, I thought you wanted a profiler, but that's a necessary tool for improving performance anyway. My first sorting system was some naive sort, and it ran damn slow. I'm working on improving the performance of my radix sort implementation at the moment, and it sorts 10,000,000 unsigned ints in about 2 seconds (1.986617 s) in debug mode (no optimisations like MMX usage). (I have a P4 at 1.89 GHz.)

Lots of good optimisation tips are offered here: http://www.azillionmonkeys.com/qed/optimize.html
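For reference, a plain LSD (least-significant-byte-first) radix sort on 32-bit unsigned ints looks roughly like the sketch below. This is a generic textbook version, not the poster's implementation, and it has none of the MMX tricks mentioned above:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// LSD radix sort on 32-bit unsigned ints, one byte per pass (4 passes total).
void radixSort(std::vector<std::uint32_t>& a) {
    std::vector<std::uint32_t> tmp(a.size());
    for (int shift = 0; shift < 32; shift += 8) {
        std::size_t count[256] = {0};
        for (std::uint32_t v : a) ++count[(v >> shift) & 0xFF];

        // Turn the counts into starting offsets for a stable scatter.
        std::size_t offset[256];
        std::size_t sum = 0;
        for (int i = 0; i < 256; ++i) { offset[i] = sum; sum += count[i]; }

        for (std::uint32_t v : a) tmp[offset[(v >> shift) & 0xFF]++] = v;
        a.swap(tmp);
    }
}

int main() {
    std::vector<std::uint32_t> data = { 170, 45, 75, 90, 802, 24, 2, 66 };
    radixSort(data);
    for (std::uint32_t v : data) std::printf("%u ", v);
    std::printf("\n");
}
```

Four counting passes over the data, each linear in the number of elements, which is why it scales well to millions of ints.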

[edited by - fooman on June 9, 2004 10:34:45 AM]

Guest Anonymous Poster
Oh, and for AI, precalculate as much as you can. Precalculate movement paths, precalculate safe spots in the environment, and whatever else you need. Even if the environment is somewhat dynamic, it's faster to update some general influence map and the precalculated structures than to do a precise calculation of each dynamic aspect for every soldier, because you have so many soldiers.

Don't do everything for every soldier; that's pointless. Nearby soldiers can share their AI, shoot the same target, and move the same way (with some variation). That's only natural and saves resources.
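One way to picture the "nearby soldiers share their AI" idea: make the expensive decision once per squad each tick and let every member simply follow it with a little cheap variation. The types and numbers below are placeholders, not a recommended design:

```cpp
#include <cstdio>
#include <vector>

// Placeholder types; the real ones depend on the game.
struct Soldier    { float x, y; int squad; };
struct SquadOrder { float targetX, targetY; };

// One decision per squad per tick instead of one per soldier. Here the
// "decision" is just a fixed rally point; in a real game it would come from
// targeting, pathfinding, or an influence map.
std::vector<SquadOrder> decideSquadOrders(int squadCount) {
    std::vector<SquadOrder> orders(squadCount);
    for (int s = 0; s < squadCount; ++s)
        orders[s] = { 100.0f + 10.0f * s, 50.0f };
    return orders;
}

// Each soldier follows its squad's order, with a tiny per-soldier offset
// so they don't all stack on the same spot.
void applyOrders(std::vector<Soldier>& army,
                 const std::vector<SquadOrder>& orders, float dt) {
    for (std::size_t i = 0; i < army.size(); ++i) {
        const SquadOrder& o = orders[army[i].squad];
        float jitter = float(i % 8) * 0.5f;   // cheap variation
        army[i].x += ((o.targetX + jitter) - army[i].x) * 0.05f * dt;
        army[i].y += (o.targetY - army[i].y) * 0.05f * dt;
    }
}

int main() {
    std::vector<Soldier> army(1000);
    for (std::size_t i = 0; i < army.size(); ++i)
        army[i] = { 0.0f, 0.0f, int(i / 50) };   // 20 squads of 50 soldiers
    std::vector<SquadOrder> orders = decideSquadOrders(20);
    applyOrders(army, orders, 0.1f);
    std::printf("soldier 0 at (%.2f, %.2f)\n", army[0].x, army[0].y);
}
```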

