This topic is now archived and is closed to further replies.

new and delete operators


Ahhh... don't. Never even think about that. If you know that you need those 500 floats all the time, why on earth would you want to allocate and free the memory 30-60 times per second? Even if it were fast, it would be a complete waste of precious cycles.

In short: no, they are not, and you should usually avoid allocating/freeing memory at runtime.
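The "allocate once, reuse every frame" advice can be sketched like this (a hypothetical example; the `Renderer` name and the 500-float size are made up for illustration):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: the scratch buffer is allocated once, up front,
// and reused every frame instead of new[]/delete[] in the game loop.
struct Renderer {
    std::vector<float> scratch;            // lives as long as the game does

    Renderer() : scratch(500, 0.0f) {}     // one allocation, at startup

    void drawFrame() {
        // reuse the same storage every frame; no new/delete here
        for (std::size_t i = 0; i < scratch.size(); ++i)
            scratch[i] = static_cast<float>(i);
    }
};
```

The buffer costs 2 KB for the whole run of the program, which is the trade the posts above are arguing for: a little permanent memory instead of 30-60 heap round-trips per second.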

Although what each of the above posts says is essentially correct (allocating memory every frame is a bad idea), the chances are that you would not see much impact, if any.

Use dynamic memory allocation *VERY* carefully. There are cases where you don't have a choice... if you go with an OOP approach, you will often face situations where you need to create a new object with the new operator... but try to arrange things so that there are as few memory allocations as possible at runtime.

What's wrong with dynamic memory allocation??? Does anyone have specific performance-impact information, or is this thread just mostly wind?

It is a GOOD thing; just use it wisely (like everything else!). I understand (and agree) that you want to minimize the amount of code that runs every single frame, but why are some of you saying to avoid allocation at runtime in general? Why? Do you want to hardcode all your array lengths? Do you know how many items the user wants/needs and how many files they will use?

You should NOT avoid dynamic memory allocation; you SHOULD use it.

quote:
Original post by BriTeg
What's wrong with dynamic memory allocation??? Does anyone have specific performance-impact information, or is this thread just mostly wind?


I think with nearly all sources on game development telling you not to allocate/free memory at runtime if you don't have to, it's more or less common knowledge. I haven't profiled it as such, but I can tell you that a list will be a lot slower than an array if you add and remove items a lot. So instead of creating and deleting hundreds of items, it's definitely faster to sacrifice the memory and keep an array of a maximum size. Though yes, that would be completely pointless if it's for savegames in a load/save dialog or other non-critical sections.

quote:

You should NOT avoid dynamic memory allocation; you SHOULD use it.


Sooner or later everybody comes to the point where they wonder whether it's better to have a list of bullets or an array of 1000 bullets with just a fraction of them active most of the time.

It's not so much running new/delete at runtime that's the problem; it's running those operations during the main game loop that can be an issue.

It's not just a speed thing (although that is important): constantly calling new and delete can lead to memory fragmentation, which over the lifetime of the game loop can start slowing down new/delete calls as the OS heap manager has to do more work to find the memory you requested.

Also, as was mentioned, if you're making and deleting 500 floats a frame, why not make them once, leave them until the end of the game or a level change, and then clean up? That saves a lot of wasted cycles which could be better used for other things.

Finally, I believe it's Game Programming Gems 1 that gives a nice tip on handling new/delete yourself via your own internal heap (in fact, I believe all 3 books in the series have articles on memory allocation and how to avoid expensive new/delete calls to the OS): you grab one large chunk of memory and allocate from it via pointers, which is nice and fast AND lets you avoid memory fragmentation.
Also, for things like particles it's often better to keep an alive/dead list so that you don't have to keep new/deleting them (this also has the side benefit that you can cap the number of particles in a system to stop the particle system killing the machine).

I feel I've waffled enough.
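A minimal sketch of the alive/dead particle pool described above (all names and the slot count are hypothetical, not from any of the books mentioned): every slot is allocated once, and a free list recycles dead particles so new/delete are never called during the game loop.

```cpp
// Fixed-size particle pool with an intrusive free list. spawn() and kill()
// never touch the heap; the pool also caps the particle count at kMax.
struct Particle { float x, y; bool alive; int next; };

class ParticlePool {
    static const int kMax = 256;   // hard cap on live particles
    Particle pool_[kMax];
    int freeHead_;                 // index of the first free slot, -1 if full
public:
    ParticlePool() : freeHead_(0) {
        for (int i = 0; i < kMax; ++i) {
            pool_[i].alive = false;
            pool_[i].next = (i + 1 < kMax) ? i + 1 : -1;  // chain free slots
        }
    }
    int spawn(float x, float y) {          // returns a slot index, or -1
        if (freeHead_ < 0) return -1;      // pool exhausted: drop the particle
        int i = freeHead_;
        freeHead_ = pool_[i].next;
        pool_[i].x = x; pool_[i].y = y; pool_[i].alive = true;
        return i;
    }
    void kill(int i) {                     // return the slot to the free list
        pool_[i].alive = false;
        pool_[i].next = freeHead_;
        freeHead_ = i;
    }
    bool alive(int i) const { return pool_[i].alive; }
};
```

Killing a particle just pushes its slot back on the free list, so the next spawn reuses it; memory never fragments because nothing is ever returned to the OS heap.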

Again, I agree that minimizing your main game loop is a Good Thing. Of course having a permanent array of 500 floats is better than allocating and freeing them every single frame. And allocating large chunks less often can be better (but a little more complicated) than allocating small chunks more often. But people see "avoid dynamic allocation" and think it means *all* the time, that new and delete are some sort of evil feature. They're just tools, like everything else, and you should use your tools properly. The overall design of your main game loop is WAY WAY WAY more important than using a permanent array instead of new/delete.

For a simple test, I took NeHe's lesson 7 (the rotating texture-mapped cube) and added code to allocate and then free (using new/delete) an array of 500 floats right before calling DrawGLScene(). Yes, there was a performance hit, but it was surprisingly small: 0.2% of the total CPU usage, roughly the same as the call to PeekMessage, which is also made every frame. And I did not notice any drop in frame rate. In fact, I had to add a loop to allocate and free the array *80 times every frame* before the CPU usage of the alloc/free code matched that of the DrawGLScene function (which simply draws a single texture-mapped cube). And even then, overall FPS wasn't really affected. And this was in debug mode, with all the overhead that adds; doing the same test in release mode, I had to alloc/free the array *500 times* every frame before it matched a single call to DrawGLScene()!

Now, I realize these numbers will vary from machine to machine, but they should help illustrate that new and delete are not performance killers. If you can optimize them away, by all means go ahead; but if your main game loop is taking too much time, your main culprit is your overall design, not a few extra calls to new/delete.

Brian

A new, even more surprising test that shows new/delete are your friends, even *inside* your main game loop:

A while ago, I wrote a Matrix screen saver using DirectDraw. I spent most of my time on a highly optimized design, as the numbers below will show. I already had a frame counter so I could display FPS, so today I simply added a 'new' counter as well (i.e. I put "newcounter++;" right after *every* call to "new" in my code; the vast majority allocate a single 32-byte struct) and then, every second, I saved the value for display and reset the counter to 0. I then display:

(# of calls to 'new' during the last second)
divided by (# of frames during the last second)
equals (avg. number of calls to 'new' per frame).

I then ran my saver as fast as it would go (I took out the delay I had built in). My results on a P4 2.4 GHz running at 1280x1024x32:

80,000 calls *per second* to 'new'
divided by 170 frames per second
equals ***470 calls to 'new' per frame***

Trying different resolutions and different settings (drop length, number of droplings, etc.), I was able to push this up to about 600 calls to 'new' per frame, and still keep the 170 fps.

Crikey! I knew I used new/delete a lot, but even I was surprised. Obviously, new/delete are not a problem. I'm 100% confident that I'm taking a performance hit with my numerous calls to new/delete instead of further optimizing my design, but with 170 fps at 1280x1024, do you think I care? :D

BTW, download it at http://www.tegarttech.com/apps/codex31b_test.zip and post your specs/results, I'd be interested in seeing them. Before running the test, go to the configuration dialog and set the "Drop Speed" to the max (100) and the "Max Droplings" to 200 (that's what I used in my test), so the saver is really busy. (It normally runs at a speed of 75 and "Sleep"s between frames, which approximates the "official" Matrix speed). Then, when you preview the saver in full screen mode, press 'f' to turn on/off the fps counter. It will display 3 numbers in the top left: the first is calls to 'new' in the last second, the second is the fps, and the third is the numbers divided, giving average number of calls to 'new' per frame.

Brian

[edited by - BriTeg on May 9, 2003 5:38:53 PM]

[edited by - BriTeg on May 9, 2003 5:42:33 PM]

Guest Anonymous Poster
quote:
Original post by Trienco
I think with nearly all sources on game development telling you not to allocate/free memory at runtime if you don't have to, it's more or less common knowledge. I haven't profiled it as such, but I can tell you that a list will be a lot slower than an array if you add and remove items a lot. So instead of creating and deleting hundreds of items, it's definitely faster to sacrifice the memory and keep an array of a maximum size. Though yes, that would be completely pointless if it's for savegames in a load/save dialog or other non-critical sections.
I don't know what sources you're talking about, but your view is close-minded and impractical. What do you do when you're limited on memory? What do you do when you don't know how many of a particular object you need? What do you do when you have 10000 objects of one type in one part of the game, and 10000 objects of another type in a different part? The biggest problem with PC programmers is their laziness and "memory is unlimited and free!" attitude. Effective dynamic memory management is the "right" way to go.

quote:
Sooner or later everybody comes to the point where they wonder whether it's better to have a list of bullets or an array of 1000 bullets with just a fraction of them active most of the time.
And hopefully, they'll make the right choice (the list). (And here's a hint as to why: it's the implementation that matters.)

And the guy above who said arrays are better than lists if you are inserting or removing a lot has it just plain BACKWARDS... arrays are ideal for getting and setting a lot without changing size... as soon as the size is changing, linked lists blow the pants off them...

The answer is the STL. Create a container. If you think you will need 500 in an extreme circumstance (but hardly ever), reserve at least 300 for the container (or some other average). This will save *some* memory and still won't get bogged down with constant memory allocation. Although I think only vectors can reserve (and I'm almost certain lists can't).
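A quick sketch of the reserve() idea above (the element count of 300 is just the average the post suggests): capacity is grabbed up front, so push_back stays allocation-free until that capacity is exceeded.

```cpp
#include <cstddef>
#include <vector>

// Reserve capacity once, outside the loop, then push elements without
// triggering any reallocation until the reserved capacity is exceeded.
bool reserveAvoidsRealloc() {
    std::vector<float> values;
    values.reserve(300);                       // one allocation, up front
    std::size_t cap = values.capacity();
    for (int i = 0; i < 300; ++i)
        values.push_back(static_cast<float>(i));
    return values.capacity() == cap;           // true: no reallocation happened
}
```

And the poster's suspicion is right: std::vector (and std::string) have reserve(), but std::list does not, since a linked list has no contiguous capacity to pre-grow.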

Also, allocating 500 floats in ONE CALL is no big deal... it will be plenty efficient... but if you call new 500 times, once for EACH FLOAT, then you will take a real performance hit... as evidenced by the post saying he could get 400-600 news per frame at 170 frames per second on a P4 2.4 GHz... your game should run at least 30-60 FPS on a Pentium 233 if a good enough video card is present... so you shouldn't be calling new/delete more than 5-50 times a frame or so, TOTAL, for all possible system needs...
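The one-call-versus-500-calls point can be sketched like this (the function name and the use of the values are made up for illustration): a single new[] is one heap call for all 500 floats, where 500 separate news would be 500 heap calls, each slower and more fragmentation-prone.

```cpp
// One heap call allocates storage for all 500 floats at once; the matching
// delete[] is the only other heap interaction.
float sumOfBlock() {
    float* block = new float[500];     // ONE heap call for 500 floats
    for (int i = 0; i < 500; ++i)
        block[i] = 1.0f;
    float sum = 0.0f;
    for (int i = 0; i < 500; ++i)
        sum += block[i];
    delete[] block;                    // one matching free
    return sum;
}
```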

Of course, this whole thing depends on the situation. If you are writing a small program which rotates a cube or something simple, then hundreds of calls to new/delete per frame are not going to make much difference, simply because you're not taxing the CPU with the rest of the code.

But if you get to the stage where you are running complex simulations + user interaction + AI + anything else that's required, those 'harmless' calls to new/delete can soon add up.

As with all things, it's a matter of situation: for a lot of things, static allocation at startup is the best solution (examples: particles, terrain maps, etc.), whereas for minor things you can get away with new/delete in the game loop without affecting anything.

Those sources are, for example, posts on here from experienced programmers (at least I consider everyone who has finished a commercial game as such) and books. Why would professional game programmers waste a lot of time writing their own memory management if new and delete are so harmless?

I just wish you had included the source for the saver. You advise upping the number of droplets, which I expect will increase the time it takes to render, and if you're hitting your fillrate limit that hard, it's no surprise you have a lot of spare time. Now let your saver do something more complex requiring a lot of CPU time (AI, complex physics) so your CPU can become the bottleneck. I'm sure you will feel the allocations a lot more.

My terrain engine has enough spare time for a few hundred thousand square roots per frame... that doesn't mean "yes, use square roots, they make the calculations more correct and won't confuse others by squaring everything else instead".

"The biggest problem with PC programmers is their laziness and "memory is unlimited and free!" attitude. Effective dynamic memory management is the "right" way to go."

Is this the same big problem as PC programmers with their "computers are so fast anyway" attitude, which results in "dumb" shooter games requiring 2 GHz machines even with lousy AI and cheap physics?

If you consider it narrow-minded to keep an eye on both (CPU and memory) instead of saying "f**k, we got enough of it", then yes, I'm narrow-minded. If you think it's stupid to use an array of 5000 pointers instead of a list of 100-4000 pointers, when rebuilding this list every frame (collecting visible objects) completely kills your frame rate, then yes, I'm stupid.

But at least I'm clever enough to remove objects from an array by copying the last element over them (or, in the case of large objects, to use pointers). And even if you don't bother because the STL can do the same for you, you should at least realize when and why it's a good idea to do so.
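The "copy the last element over it" removal trick mentioned above looks like this (a generic sketch; the function name is made up): an O(1) erase from an array, at the cost of not preserving element order.

```cpp
// Remove arr[index] in O(1) by overwriting it with the last element.
// Returns the new logical size; element order is NOT preserved.
int removeAt(int* arr, int count, int index) {
    arr[index] = arr[count - 1];   // the last element fills the hole
    return count - 1;              // caller treats this as the new size
}
```

Compare this with shifting every later element down (O(n) per removal), which is what makes naive array erasure look slow in the first place.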

quote:
Original post by Trienco
Those sources are, for example, posts on here from experienced programmers (at least I consider everyone who has finished a commercial game as such) and books.



How do you know who here is and isn't an experienced programmer? For the last 10 years I have been receiving a salary to develop software in C and C++.

quote:

Why would professional game programmers waste a lot of time writing their own memory management if new and delete are so harmless?



Because there's always a better way to do something. It doesn't mean the previous way is evil. BTW, I suspect at least 80% of the people on these forums wouldn't be able to write their own memory management, and half of the people who could wouldn't see much performance improvement.

quote:

Now let your saver do something more complex requiring a lot of CPU time (AI, complex physics) so your CPU can become the bottleneck. I'm sure you will feel the allocations a lot more.



Maybe. And if that started to happen, I would simply run a profiler on the code, find the bottlenecks, and change the design appropriately. The point is, the first several posts in this thread all said "never" use dynamic allocation at runtime! The original poster asked about *a single* call to "new" each frame, and people were condemning that. I simply demonstrated that calling "new" 500 times per frame has approximately the same impact as rendering a *single cube*. Now surely, if your app were to "do something more complex requiring a lot of cpu time (ai, complex physics) so your cpu can become the bottle neck", having your code render one less cube per frame wouldn't make much difference, would it? So who cares about a few calls to "new" when a few *hundred* calls to "new" have the same impact? And if doing something more complex (AI, complex physics) along with my many calls to "new" caused the frame rate to drop significantly, I'd bet my grandma's cookie recipe that running a profiler would show the AI and complex physics to be by far the first place to direct your optimization efforts.

quote:

My terrain engine has enough spare time for a few hundred thousand square roots per frame... that doesn't mean "yes, use square roots, they make the calculations more correct and won't confuse others by squaring everything else instead".



Why not? If you have the spare time between frames, don't you want more accuracy and more readable code? Have you tried putting in square roots, then running a profiler? If so, and if square roots were a dominant resource hog, then by all means optimize them away. But if they are 20th on the list of CPU munchers, focus your optimization effort on the top 5.

quote:

"The biggest problem with PC programmers is their laziness and "memory is unlimited and free!" attitude. Effective dynamic memory management is the "right" way to go."

Is this the same big problem as PC programmers with their "computers are so fast anyway" attitude, which results in "dumb" shooter games requiring 2 GHz machines even with lousy AI and cheap physics?



Again, it's not "lazy" to skip optimizing every single detail of your code. You have to be efficient with the amount of time spent developing your code as well. 1. Come up with what looks like a half-decent design, 2. develop the code, 3. profile the code, 4. optimize the top 5 performance hogs. Repeat steps 3 and 4 until the benefit you get from your app (usually measured in cash) no longer justifies the effort you're putting in (usually measured in time). Yes, I could spend some time and optimize my screen saver much more. But why?

quote:

If you think it's stupid to use an array of 5000 pointers instead of a list of 100-4000 pointers, when rebuilding this list every frame (collecting visible objects) completely kills your frame rate, then yes, I'm stupid.



The point is, "rebuilding this list every frame" does NOT "completely kill your frame rate". We've all said that if it does, of course fix it! But run your profiler!!! It is your friend! Why fix the 20th-worst performance killer, which eats say 3% of the CPU instead of 1%, and leave the top 5 monsters? Once your memory management system is in the top 5, *then* focus on it. Using a half-dozen calls to "new" each frame to maintain a list of 100-4000 pointers instead of maintaining a static array of 5000 DOESN'T MATTER; it won't kill your frame rate. Try it. Profile it. Believe it.

I'd like to see anyone get away with not using dynamic allocation at runtime. In fact, I've never seen it said "don't do it", and anyone who says otherwise is frankly crazy.

The key is not 'runtime' allocation (that's fine) but allocation/deallocation PER FRAME, which is the issue.

Most of the time you have no reason to allocate/deallocate per frame. Most things can be pre-allocated at map/level startup (this includes any particle systems), leaving only minor allocations per frame, and even then I doubt you'll do it EVERY frame. (Example: the player fires a rocket, so you make a new instance of that rocket. The rocket is going to live for a few frames, and you won't fire another one for whatever the delay is on rocket firing, so it could be a good 2 seconds before you need to allocate again, and maybe by then you can reuse the memory.)

The key point against constantly allocating/deallocating isn't so much the time (although, as said before, doing lots of allocations/deallocations per frame is wasteful of CPU cycles, and to be honest the cost of 500 floats vs. wasted allocation time probably comes out in favour of a static array of 500 floats even if the difference is just 1 cube), but rather memory fragmentation, which can occur from allocating and deallocating different-sized objects over time. Over the lifetime of a map this _could_ become an issue if the heap manager of the host OS has to shuffle things around to accommodate your request.

As an aside, there must be some good in pre-allocating more memory than you need, or the STL wouldn't do so in its vector container, for example.

OK, "runtime" was of course a bad choice of words; I meant the actual game (excluding menus, loading screens, etc.).

Concerning fragmentation, a much more interesting test would be a few dozen allocations of random size, freed at random times in a random order.

Call me a control freak, but I feel uncomfortable if I can't predict whether and when something bad will happen. So even if the chance is less than 1% that memory will look like swiss cheese after a few minutes, with holes too small for some allocations, causing the HDD LED to light up more frequently than the muzzle of your rendered gun, I wouldn't take that chance.

It's not as if avoiding it is a hardcore optimization that requires hours of work.

And about rebuilding that list: I finally wanted to add a tree for culling instead of brute-force testing all patches.

Approach 1: by-the-book recursion; not even half as fast as brute force.
Approach 2: a queue instead of recursion; faster, but not even close to brute force (probably not just because of creating and deleting thousands of elements each frame, but also because it made the cache quite useless).
Approach 3: a fixed-size array; finally at least as fast as brute force (due to tree traversal and longer tests that couldn't be helped for fewer than about 4500 patches).

So, YES, it did kill the frame rate. Of course you can say "who cares, if you had it running at 80fps and your fillrate was the limit anyway"... but the difference is that now it won't require more than 1 GHz; it will run about as fast with maybe 300 MHz, as long as there's a decent graphics card.
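A hypothetical sketch of "approach 3" (the node layout and function names are my own, not the poster's code): traverse the tree iteratively with a fixed-size index stack, so culling allocates nothing per frame.

```cpp
// Quadtree-style node; child[i] == -1 means no child in that slot.
struct Node { int child[4]; bool visible; };

const int kMaxNodes = 4096;   // upper bound on nodes, fixed at build time

// Marks every node reachable from `root` as visible, using a fixed array
// as the traversal stack instead of recursion or a heap-allocated queue.
// Returns the number of nodes visited.
int markVisible(Node* nodes, int root) {
    int stack[kMaxNodes];      // lives on the call stack; no new/delete
    int top = 0, visited = 0;
    stack[top++] = root;
    while (top > 0) {
        int n = stack[--top];
        nodes[n].visible = true;   // a real engine would test the bbox here
        ++visited;
        for (int c = 0; c < 4; ++c)
            if (nodes[n].child[c] >= 0)
                stack[top++] = nodes[n].child[c];
    }
    return visited;
}
```

Because the stack is a plain array of indices, traversal touches a small, contiguous block of memory, which is also kinder to the cache than chasing queue nodes scattered across the heap.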

That doesn't mean you should make every variable you might ever need static or global (though yes, in D3D I keep a static temp matrix around)... but if you know that you will need something all the time, or even every frame, why not allocate it once and be done with it? Not even the "PC programmers think memory is cheap" argument works here, because you need that memory anyway, whether you allocate and free it all the time or not.

Also: if you allocate it all when you load a level, you can be sure it's there and will "fit" into memory... instead of walking round a corner and finding yourself waiting for Windows to free up some more memory. (Talking about that: there are far too many games that seem to have problems here, causing major slowdowns whenever something, a texture for example, appears for the first time.)

I'm not convinced memory fragmentation is going to be that big of a concern. If you're allocating and freeing the same type (size, pattern) of memory every frame, the holes you leave behind after a frame are going to be the exact same holes you fill next frame, so your overall memory image will stay pretty much intact. Memory managers usually aren't written by complete bozos.

quote:

It's not as if avoiding it is a hardcore optimization that requires hours of work.



I agree. No one is saying to always use dynamic allocation instead of static arrays. I'm just saying the performance hit of using dynamic allocation is minuscule, and thus it becomes a non-issue compared to where you *should* be spending your optimization effort.

quote:

So, YES, it did kill the frame rate. Of course you can say "who cares, if you had it running at 80fps and your fillrate was the limit anyway".



AGAIN, if you *profile* your code and discover that excessive memory allocations are in the top 5 of your CPU usage, by all means fix it. But if you really have a culling tree that requires "creating and deleting thousands of elements each frame", I'd question the design of the tree in the first place; I'd bet there are other places that need optimization more urgently than the calls to new/delete. Doing thousands of anything each frame is a candidate for optimization, not just memory allocation.

quote:

But if you know that you will need something all the time, or even every frame, why not allocate it once and be done with it?



Sure. But it's the programmer's call whether he thinks dynamic allocation or reserving memory you don't need is the lesser of two evils. It depends on the programmer's style and the design of the app. It's a trade-off. It's wrong to just assume that you should never use dynamic allocation in your main loop. That assumption is based on another assumption: that calling 'new' is expensive, as evidenced by the initial replies in this thread. That assumption is wrong, as evidenced by actually profiling real code.

quote:

If you allocate it all when you load a level, you can be sure it's there and will "fit" into memory.



Agreed. But a level is a case where you know beforehand what memory requirements you'll need. That's not always the case.

quote:
Original post by BriTeg
I'm not convinced memory fragmentation is going to be that big of a concern. If you're allocating and freeing the same type (size, pattern) of memory every frame, the holes you leave behind after a frame are going to be the exact same holes you fill next frame, so your overall memory image will stay pretty much intact. Memory managers usually aren't written by complete bozos.



That situation shouldn't be a problem; if you leave your frame the way you found it, nothing can happen. But imagine creating a lot of projectiles which live for different lengths of time. From time to time items are spawned, and sooner or later you can't tell when something will be allocated and freed again. It's not too likely, but possible, that frequently allocated smaller objects leave more and more holes while larger objects have to go to "higher and higher" memory to find enough space. Not too likely, but a situation where I feel I've lost control and potentially something could go wrong. Solving problems that might or might not appear in very special situations may not be affordable, but especially when dealing with software from certain companies, I wish they had at least thought about the obvious problems.

quote:

...and thus it becomes a non-issue compared to where you *should* be spending your optimization effort.



As soon as it takes more than 5 seconds, or you have to think about which is better... yes, forget it, use the way that's easier and come back later if you have to. But especially the good old question of list or array for projectiles is one where everybody makes his own decision, and mine was to sacrifice 100 KB in favour of the cache and less work. Particle systems might be a very similar problem.

quote:

AGAIN, if you *profile* your code and discover that excessive memory allocations are in the top 5 of your CPU usage, by all means fix it. But if you really have a culling tree that requires "creating and deleting thousands of elements each frame", I'd question the design of the tree in the first place; I'd bet there are other places that need optimization more urgently than the calls to new/delete. Doing thousands of anything each frame is a candidate for optimization, not just memory allocation.



The tree itself was a typical tree: 4 children, a bbox, a pointer to the patch... The problem was traversing it, as recursion had horrible overhead. Using a list and iteration instead was better, but resulted in constantly adding and removing nodes. Reserving space for the list would have worked, of course, but then it wouldn't have been too different from the array I ended up with anyway.
If I remember the numbers, that was 30%, 70% and 99.9% of the speed I had with brute force. So by now I'm really careful about trees if I don't have a very high number of leaves.

That was by far the most useless optimization so far. Culling was the main problem (as the app wasn't doing much else on the CPU anyway), but some things can't be optimized further (OK, I didn't care to lay hands on the assembler output).

quote:
That assumption is based on another assumption: that calling 'new' is expensive, as evidenced by the initial replies in this thread. That assumption is wrong, as evidenced by actually profiling real code.


An assumption that MSVC seemed to prove: with 500 new/delete calls I measured about 4 ms for them, which would be about 25% of the time I can spend on one frame. Of course, not running the app from within VC++ and compiling with the right options reduced that to 0.1 ms. So I admit that the time isn't as much of a problem as I thought. Fragmentation still might be, and the more so the less memory your system has.

quote:

Agreed. But a level is a case where you know beforehand what mem requirements you''ll need. That''s not always the case.


Not always, but you can often estimate. Assume your fastest "entity spawning" object creates 100/s (and pretend it's a shooter with 32 players max... for an MMORPG that's pointless, but they usually aren't based on fast gameplay anyway)... reserving 3200 projectiles would be enough. If, of course, you would have to reserve 10 MB when you usually don't need more than 100 KB, forget about it. But again, that's my personal preference: having everything in its place and feeling in control (at least until I write past an array boundary and spend an eternity finding out why a value suddenly turned into nonsense).

And thinking about the example above, there would be more urgent problems, I think. Like not firing 100 bullets if you have less than 100 fps, and the additional processing to create the missing ones later and calculate their correct positions. Or in the case of hitscan... ouch, you couldn't even fix it without potentially dealing damage to someone who's long gone. Alright, don't spend too much on memory issues. Concerning the initial post: if it's 500 floats every single frame, I would definitely either make them static or not call delete before the end. Even if it's just to feel better, because however much work it really is, it seems to be useless work.

quote:
Original post by Trienco
But imagine creating a lot of projectiles which live for different lengths of time. From time to time items are spawned, and sooner or later you can't tell when something will be allocated and freed again. It's not too likely, but possible, that frequently allocated smaller objects leave more and more holes while larger objects have to go to "higher and higher" memory to find enough space.



I'm still not convinced. What is a lot of projectiles? 20? 50? 500? You're allocating/deallocating memory within only a few kilobytes, max. Hardly enough to make swiss cheese out of your RAM. I guess until someone can provide an example with real data and a real memory fragmentation image, this is all speculation on both sides.

quote:

Not too likely, but a situation where I feel I've lost control and potentially something could go wrong.



I don't understand what you mean about losing control.

quote:

But especially the good old question of list or array for projectiles is one where everybody makes his own decision, and mine was to sacrifice 100 KB in favour of the cache and less work. Particle systems might be a very similar problem.



Again, your decision was a good one. That doesn't mean it's bad if someone else chose to call 'new' for each projectile or particle.

BTW, I ended up changing my saver, and it only took me 10 minutes to change from calling 'new' several hundred times per frame to using a static array. I now only call 'new' once per frame. It *is* a cleaner, easier way to do it, but now it sets aside a large chunk of memory it usually doesn't fully use. The fps didn't change at all, but I do feel a little more confident: because the design is a little simpler now, there's less chance for a bug in my memory handling.

quote:
Original post by BriTeg
I'm still not convinced. What is a lot of projectiles? 20? 50? 500? You're allocating/deallocating memory within only a few kilobytes, max. Hardly enough to make swiss cheese out of your RAM. I guess until someone can provide an example with real data and a real memory fragmentation image, this is all speculation on both sides.



Maybe I'm a little radical; I had 500 ships with 2 guns each, firing 5 shots per second and living for a few seconds. So I'd estimate about 25000 projectiles max, with maybe a third in use. Though I remember I was using a list too (with the constructor and destructor adding and removing a projectile from the list... I don't know if I will ever again have code that's supposed to look like new Projectile(ShipIter->Pos); and then just forget about it *g*). Back then, the AI for all the ships was the bigger problem.
I think it should have been safe: as all projectiles were the same size, a new one could always fill the gap. But I can't foresee that with objects of different sizes, and that's what I mean by losing control (or at least feeling like I do)... I can't tell if there's a situation where things go wrong.

quote:

BTW, I ended up changing my saver, and it only took me 10 minutes to change from calling 'new' several hundred times per frame to using a static array. I now only call 'new' once per frame. It *is* a cleaner, easier way to do it, but now it sets aside a large chunk of memory it usually doesn't fully use. The fps didn't change at all, but I do feel a little more confident: because the design is a little simpler now, there's less chance for a bug in my memory handling.


Hehe... see? That's what I mean about at least feeling a little better even if it's not making much difference. It feels a little more neat and tidy ;-) (as long as doing that won't waste several hundred KB or more... though having a look at some games, I don't think anybody would care about wasting several MB today *g*)

If you guys want to run your games on a 486 DX2, OK, that can make a difference... But now? You have 1500 MHz? 3000 MHz? A GF4? An ATI 9700?! 25000 or 50000 or 100000 projectiles with new and delete will never make your FPS slow down!! (if you don't do all the news at the same time :o)

I always use linked lists, and I think it's the best way.
