Does it necessarily depend on the platform (console, PC, handheld), on the optimization algorithm not being efficient enough, or just on the number of objects currently on the screen being too much for the platform to handle?
as mentioned above, the game can just be doing too much for the hardware (running slow all the time), or you can have momentary slowdowns during a slow operation, such as paging a resource off disk.
both of these can happen on any platform, with any language and any tools. it's all caused by you telling the computer to do more than it can do in the time allotted.
momentary slow operations can be split up over multiple frames (think streaming terrain chunks off disk) to spread the workload out over time and avoid the stall.
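a minimal sketch of that idea in C++ (the work queue and names here are hypothetical, not from any particular engine): each frame, a per-frame time budget caps how much streaming work gets done, and leftover work carries over to the next frame.

```cpp
#include <chrono>
#include <deque>
#include <functional>

// hypothetical work queue: each item is one small slice of a slow job
// (e.g. decompressing one terrain chunk). names are illustrative only.
std::deque<std::function<void()>> g_streamingWork;

// run queued work until the per-frame budget is spent, then stop and
// pick up where we left off on the next frame.
void RunStreamingWork(std::chrono::microseconds budget)
{
    auto start = std::chrono::steady_clock::now();
    while (!g_streamingWork.empty() &&
           std::chrono::steady_clock::now() - start < budget)
    {
        g_streamingWork.front()();   // do one slice of work
        g_streamingWork.pop_front();
    }
}

// called once per frame, e.g. with a 2 ms budget:
// RunStreamingWork(std::chrono::microseconds(2000));
```

with a small budget like 2 ms, a load that would stall one frame gets spread over several frames instead.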
although it's possible for simulation to slow things down, it's usually trying to draw too much on the screen that's the culprit. in some cases it's both: graphics plus some contribution from AI eating up all your processing power.
the key to improving frame rate is to do less.
this can be done by:
1. optimization: doing things more efficiently.
2. pre-processing: doing things ahead of time (before release, at load time, etc). see the first sketch after this list.
3. splitting slow tasks over multiple frames (streaming terrain chunks as sketched above, using multiple frames to calculate long A* paths, etc).
4. actually doing less. for example, in an FPS title, design levels so there are never too many bad guys active at once to draw on the screen at 60 fps. if your game engine can only draw 10 bad guys at 60 fps, you design the levels so there are never more than, say, eight active at once. that way you guarantee you're never over-taxing the hardware. see the second sketch after this list.
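for point 2, here's a small sketch of load-time pre-processing (the lookup-table idea and all names are illustrative, not a prescribed technique): bake expensive math into a table once at load time instead of recomputing it every frame.

```cpp
#include <array>
#include <cmath>

// hypothetical load-time bake: precompute sine values once instead of
// calling std::sin() every frame for every particle. size is arbitrary
// but must be a power of two for the wrap trick below.
constexpr int kTableSize = 1024;
std::array<float, kTableSize> g_sinTable;

void BakeTablesAtLoadTime()
{
    for (int i = 0; i < kTableSize; ++i)
        g_sinTable[i] = std::sin(i * 2.0f * 3.14159265f / kTableSize);
}

// fast (approximate) lookup used in the hot path at runtime.
float FastSin(float radians)
{
    int i = static_cast<int>(radians * kTableSize / (2.0f * 3.14159265f));
    i &= kTableSize - 1;   // wrap into table range (power-of-two size)
    return g_sinTable[i];
}
```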
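and for point 4, the "never more than eight active" rule can be enforced directly in code. a trivial sketch with made-up names:

```cpp
// illustrative only: the engine was measured to hold 60 fps with 10
// enemies, so the design caps active enemies below that, with headroom.
const int kMaxActiveEnemies = 8;

int g_activeEnemies = 0;   // hypothetical global count

bool TrySpawnEnemy()
{
    if (g_activeEnemies >= kMaxActiveEnemies)
        return false;        // defer the spawn; never over-tax the hardware
    ++g_activeEnemies;
    // ... actual spawn logic would go here ...
    return true;
}
```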
How many factors are taken into consideration to improve the frame rate?
while there are multiple causes (slow CPU, slow hard drive, slow GPU, slow code), the only thing you really care about is elapsed time.
so the first tool in any code optimizer's bag of tricks, after simply inspecting the code for obvious inefficiencies, is a high-resolution timer to measure the elapsed time of a section of code in milliseconds, microseconds, or performance counter ticks.
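a minimal sketch of such a timer in C++ (std::chrono is standard; the RAII wrapper and the names are just illustrative):

```cpp
#include <chrono>
#include <cstdio>

// RAII timer: measures elapsed wall-clock time of a scope in microseconds
// and prints it when the scope ends.
struct ScopedTimer
{
    const char* label;
    std::chrono::steady_clock::time_point start;

    explicit ScopedTimer(const char* l)
        : label(l), start(std::chrono::steady_clock::now()) {}

    ~ScopedTimer()
    {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start).count();
        std::printf("%s: %lld us\n", label, static_cast<long long>(us));
    }
};

// usage: wrap a suspect section and read the number off the log.
// {
//     ScopedTimer t("UpdateAI");
//     UpdateAI();   // hypothetical function under test
// }
```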
slow hardware is a challenge to be worked around.
slow code is a problem that needs to be fixed.
simply trying to do too much is a design flaw that should have been identified by means of a rapid prototype testing the scope and requirements of the game against the target platform's capabilities.
here's a good approach to use for optimization:
1. start with straightforward algos, unless you KNOW they'll be too slow.
2. if you think an algo may be too slow, test it first (rapid prototype). this can be as simple as a little test routine you add to the game to try out this or that to see if it runs fast enough.
3. always think in terms of efficiency (code that runs fast) when coding. try to avoid slow code in time-critical areas. if you code with speed in mind, you tend to write more efficient code from the get-go and require less optimization later.
4. when the frame rate drops, break out the timers (or a profiler) and use a divide-and-conquer strategy to identify bottlenecks. with a profiler, you can time pretty much everything at once and instantly see where all your clock cycles go.
5. once you've identified bottlenecks, optimize that code. the goal is to reduce the time required, both to execute the code and to move it and its data into and out of the CPU/GPU/RAM/hard drive. this is where things like "cache-friendly code" come into play (see the sketch below). gamedevs, like any good engineers, will go to great lengths to find an engineering solution to the challenge of too much game and too little hardware. at the high end, optimizations can become rather complex and can require a fair amount of R&D (research and development) work. if you can find it, Michael Abrash's lecture on the optimization of the Quake engine at CGDC '96 is quite interesting - all the things they tried.
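as one small illustration of cache-friendly code (a common sketch, not anything from Abrash's lecture, and all field names are made up): iterating over tightly packed arrays of just the data you need keeps each cache line full of useful bytes, versus striding through large heterogeneous objects.

```cpp
#include <cstddef>
#include <vector>

// cache-unfriendly: updating positions drags lots of cold data
// (name, AI state, etc.) through the cache along for the ride.
struct EntityAoS
{
    float x, y, z;
    float vx, vy, vz;
    char  name[64];          // cold data, loaded anyway
    int   aiState, health;   // more cold data
};

void UpdateAoS(std::vector<EntityAoS>& es, float dt)
{
    for (auto& e : es) { e.x += e.vx * dt; e.y += e.vy * dt; e.z += e.vz * dt; }
}

// cache-friendly: structure-of-arrays keeps positions and velocities
// packed contiguously, so the hot loop touches only hot data.
struct EntitiesSoA
{
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
};

void UpdateSoA(EntitiesSoA& e, float dt)
{
    for (std::size_t i = 0; i < e.x.size(); ++i)
    {
        e.x[i] += e.vx[i] * dt;
        e.y[i] += e.vy[i] * dt;
        e.z[i] += e.vz[i] * dt;
    }
}
```

both loops do the same arithmetic; the second one just wastes far less memory bandwidth, which is exactly the "moving data into and out of the CPU" cost mentioned above.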