What improves memory performance?

40 comments, last by lawnjelly 11 years, 7 months ago

[quote name='Radikalizm' timestamp='1346950016' post='4977265']
The memory is now unavailable for use by both your application and the operating system, and thus cannot be reassigned to another application for as long as the original application is running.

This is not really true on desktop operating systems. In practice, when some other process needs more memory than is currently available, the virtual memory system will stash a few of your memory pages to disk, and grant that physical memory to the other process. Obviously, this is all pretty transparent to both applications.

On a console or mobile device, there often isn't a virtual memory system, in which case your statement is correct.
[/quote]

Well yeah, but for simplicity's sake I thought it would be better not to include an explanation of how paging works as it wasn't needed to explain the basic idea of a memory leak :)

I gets all your texture budgets!


[quote name='Radikalizm' timestamp='1346950016' post='4977265']
The memory is now unavailable for use by both your application and the operating system, and thus cannot be reassigned to another application for as long as the original application is running.

This is not really true on desktop operating systems. [...relevant stuff...]
[/quote]
This is kind of splitting hairs. I think you're focusing more on the physical RAM (i.e. where your memory is), whereas Radikalizm I think was focusing more on the "big picture" of overall memory use (where you have a limited amount to work with on a computer, regardless of where that virtual memory physically is). So I'd say you're both right and both bring up good points.
[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]
It's okay guys! It's all good! It's all good advice: virtual memory is good to know when creating a game for desktop, and there are different ways to handle memory allocation for mobile and console games. I think it's just best practice to ensure that I deallocate the memory - if I ever do port the engine to a console or phone, it's simply good practice. Regardless of virtual or physical, I think it shows proper, effective coding. I've got to get that Effective Coding ebook on Amazon. Currently, I'm aiming at effective and proficient coding techniques.
Game Engine's WIP Videos - http://www.youtube.com/sicgames88
SIC Games @ GitHub - https://github.com/SICGames?tab=repositories
Simple D2D1 Font Wrapper for D3D11 - https://github.com/SICGames/D2DFontX

where you have a limited amount to work with on a computer, regardless of where that virtual memory physically is

On your average 64-bit desktop with a 500+ GB hard drive, you could easily use several hundred gigabytes of memory - despite only having < 16 GB of physical RAM (you may also be able to allocate several terabytes of memory, provided you don't actually write any data to it).

My point is that you don't free memory on a desktop OS because you are worried about running out, you free memory to avoid the performance hit caused by unnecessary paging.
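
To illustrate the "allocate terabytes without touching it" point: on 64-bit Windows you can reserve address space without committing physical pages (a rough sketch using VirtualAlloc; the rough POSIX equivalent would be mmap with PROT_NONE):

```cpp
#include <cstdio>
#include <windows.h>

int main()
{
    // Reserve 1 TB of address space. Nothing is committed yet, so no
    // physical RAM or pagefile is consumed, and on a 64-bit OS this
    // usually succeeds even on a machine with 8 GB of RAM.
    const SIZE_T oneTerabyte = SIZE_T(1) << 40;
    void* reserved = VirtualAlloc(nullptr, oneTerabyte, MEM_RESERVE, PAGE_NOACCESS);
    std::printf("reserved 1 TB at %p\n", reserved);

    // Physical memory only gets used once pages are committed
    // (MEM_COMMIT) and actually written to.
    if (reserved)
        VirtualFree(reserved, 0, MEM_RELEASE);
    return 0;
}
```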

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


So, a rule of thumb is to consistently make sure there are no memory leaks, because they can cause performance slowdowns. Got it. Additional note: keep proper heaps so there's less fragmentation in memory - got it.


"Got it" ... but do you understand the reason? If you're trying to learn, you should be able to explain why.

I actually don't think anyone said memory leaks cause a performance slow down. That's kind of a vague statement anyway. A more accurate, concrete statement would be that memory leaks can cause future allocations to fail because there will no longer be sufficient virtual memory left (although on a 64-bit machine, this may take "forever"). So your program will run out of memory and crash or lose functionality or end up in a corrupt state (depending on how well you handle the allocation failures). However, on the way there, it could likely cause performance problems if there is a lot of paging to disk.
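
For the record, a leak in C++ is just an allocation whose last pointer is lost before delete runs. A deliberately bad sketch (Texture and loadLevel are made-up names):

```cpp
struct Texture
{
    unsigned char pixels[4 * 1024 * 1024];   // ~4 MB per texture
};

void loadLevel()
{
    Texture* t = new Texture();   // heap allocation
    // ... use t ...
    // Returning without 'delete t' and without storing the pointer
    // anywhere means those 4 MB can never be reclaimed by this process.
}

int main()
{
    // Each call leaks another ~4 MB. Long before the 64-bit address
    // space runs out, the machine starts paging heavily, and eventually
    // 'new' throws std::bad_alloc.
    for (int i = 0; i < 100000; ++i)
        loadLevel();
    return 0;
}
```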

Someone did say that lots and lots of small heap allocations (regardless of whether you're leaking the memory) could cause performance issues, and that is true. Allocating from the heap can be a *relatively* slow operation in C++. It's not typically something you should be worried about at this stage though.
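
As an illustration, the usual fix is to reuse a buffer rather than allocating every frame (Particle and the function names are invented for the example):

```cpp
#include <vector>

struct Particle { float x, y, z, life; };

// Heap-heavy: a fresh vector every frame keeps reallocating as it grows.
std::vector<Particle> buildParticlesNaive(int count)
{
    std::vector<Particle> v;
    for (int i = 0; i < count; ++i)
        v.push_back(Particle{0.0f, 0.0f, 0.0f, 1.0f});
    return v;
}

// Better: reuse one scratch buffer; clear() keeps the capacity, so after
// the first frame steady-state calls do no heap allocation at all.
void buildParticlesReused(std::vector<Particle>& scratch, int count)
{
    scratch.clear();
    scratch.reserve(count);
    for (int i = 0; i < count; ++i)
        scratch.push_back(Particle{0.0f, 0.0f, 0.0f, 1.0f});
}
```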

It's definitely more than "best practice" to ensure that you deallocate memory. It's necessary if you are writing any quality code - something you'd want to ship and have other people use.
Hate to say it, but this is where I think GC languages like Java and C# do quite well.

Basically, rather than allocating and deallocating memory as they go along, they allocate the memory but only deallocate at certain points, at which point they clean up a lot of memory in one pass, which is ultimately quicker.

If you do look into memory pools, stuff can get quite complicated so sometimes it might be nice to leave it to the GC platform's memory pool.

Frankly I prefer the simplicity of smart pointers, but they are ultimately "inefficient".
By using C++.NET you can perhaps get the best of both worlds: auto_handle<T> gives you deterministic disposal of memory for patterns requiring RAII, while gcnew gives you garbage-collected memory.
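
In plain C++11 (without going through C++.NET), the smart-pointer version of this is just RAII, something like the following (Mesh is a made-up type):

```cpp
#include <memory>
#include <vector>

struct Mesh
{
    std::vector<float> vertices;
};

void render()
{
    // unique_ptr: single owner, deterministic destruction at scope exit,
    // and no explicit delete to forget.
    std::unique_ptr<Mesh> mesh(new Mesh());
    mesh->vertices.resize(1024);

    // shared_ptr: reference counted, freed when the last owner goes away.
    std::shared_ptr<Mesh> shared = std::make_shared<Mesh>();
    std::shared_ptr<Mesh> secondOwner = shared;   // ref count is now 2
}   // everything is released here, even if an exception was thrown above
```

The "inefficiency" mentioned above is mostly the shared_ptr reference counting; unique_ptr is essentially free.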
http://tinyurl.com/shewonyay - Thanks so much for those who voted on my GF's Competition Cosplay Entry for Cosplayzine. She won! I owe you all beers :)

Mutiny - Open-source C++ Unity re-implementation.
Defile of Eden 2 - FreeBSD and OpenBSD binaries of our latest game.
Phil_t, I can easily comprehend what you are saying. Yes, it is a necessity in coding to ensure memory is deallocated, regardless of the OS. I don't care if a person is running a HAL 9000 or whatever - in my mind it's still proper coding effectiveness. Everyone is right in their own way, based on experience. So, yes, I "got it" and I comprehend. If I was a bit confused, I would ask more questions, right? That's what a person learning would naturally do, right?

Good job everyone for giving some helpful advice. This is why I've given you all positive reputation points.
Game Engine's WIP Videos - http://www.youtube.com/sicgames88
SIC Games @ GitHub - https://github.com/SICGames?tab=repositories
Simple D2D1 Font Wrapper for D3D11 - https://github.com/SICGames/D2DFontX

Hate to say it, but this is where I think GC languages like Java and C# do quite well.
GC's are absolutely horrible in games. The collection process itself is a cache-miss nightmare (traversing the object graph is basically random access, making it worst-case for caches, making it memory bound, making it run at about 1/800th efficiency compared to regular CPU logic) and the amount of work it has to do is hidden from the programmer and unpredictable. Sometimes you'll just have a huge spike in your frame time because the GC decided to go on a garbage hunt that frame... So then, if you're lucky, you can put time limits on your GC so it doesn't blow your frame times, and explicitly call it at an appropriate moment... However, if it then needs to run for a long time and you've capped it, memory starts filling up with garbage and you run out of memory! So now you're left to go and rewrite all your code so as to not produce any garbage, using persistent pools etc., just like you would in C/C++ code that avoids new/delete anyway... and we come full circle to the advice of 'don't allocate memory'.

GC's are absolutely horrible in games.

For hard real time, it is definitely a problem. For soft real time, there are possibilities. It depends on whether a game is hard or soft real time. For action games like shooters, which depend on a steady FPS and consistent, repeatable responses, it may certainly be a problem.

I have an MMO RPG server designed to support 10,000+ players. Being an RPG, rather than a shooter, lowers the requirements. The server is programmed in Go, which is entirely based on garbage collection. Tests with 1000 players show a steady load of approximately 10% on the target machine. True, it is a synthetic test, and not proof that it will work in a real situation, but it looks very promising so far. I should also admit that the design of the server has been influenced by the GC mechanism; that is, data is preferentially re-used instead of thrown away. But that is no different from usual C/C++.

I think the aversion to GC is a little too strong here and there, especially from hard-core C/C++ programmers.
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/
Fixed size memory pools FTW. They are great.

The downsides are that you (typically) have to know in advance the maximum number of objects you will want in the worst-case scenario. In addition, the memory preallocated for the pool is not available for other uses.

The upsides are that they are blazingly fast, with constant-time allocation and deallocation, there is no fragmentation, and provided you choose the maximums correctly your program CANNOT crash due to an allocation failure.
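
A minimal fixed-size pool along those lines (just a sketch with made-up names; a real one would add debug checks and possibly thread safety):

```cpp
#include <cstddef>
#include <new>

// Preallocates MaxCount slots for T and hands them out via an intrusive
// free list: O(1) allocate/release, no fragmentation, capacity fixed up front.
template <typename T, std::size_t MaxCount>
class FixedPool
{
public:
    FixedPool()
    {
        // Thread every slot onto the free list.
        for (std::size_t i = 0; i < MaxCount; ++i)
            m_slots[i].next = (i + 1 < MaxCount) ? &m_slots[i + 1] : nullptr;
        m_freeList = &m_slots[0];
    }

    T* allocate()
    {
        if (!m_freeList)
            return nullptr;            // pool exhausted: the caller decides what to do
        Slot* slot = m_freeList;
        m_freeList = slot->next;
        return new (slot) T();         // construct T in place with placement new
    }

    void release(T* object)
    {
        object->~T();                  // destroy, then push the slot back on the free list
        Slot* slot = reinterpret_cast<Slot*>(object);
        slot->next = m_freeList;
        m_freeList = slot;
    }

private:
    union Slot
    {
        Slot* next;                                   // used while the slot is free
        alignas(T) unsigned char storage[sizeof(T)];  // used while the slot holds a T
    };

    Slot  m_slots[MaxCount];
    Slot* m_freeList;
};
```

Usage is then something like `FixedPool<Bullet, 256> g_bullets;` with `g_bullets.allocate()` / `g_bullets.release(b)` in place of new/delete (Bullet being a hypothetical type).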

You can also implement your own heap with buckets but it's not something I'm a fan of.

You can also (in C++) override new and delete to keep track of your allocations. You can use different heaps / counters for different modules and budget your memory between them. Very useful on consoles and limited-memory devices. This can also report any memory leaks on closing, along with which module and which file they came from.
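
Roughly what that override can look like (a global-counter sketch; a real version would also override new[]/delete[], keep per-module counters, capture __FILE__/__LINE__ through a macro, and be thread-safe):

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

static std::size_t g_liveAllocations = 0;
static std::size_t g_liveBytes       = 0;

// Header placed in front of every block so delete can account for its size.
// Sized to max_align_t so the pointer we hand back stays properly aligned.
static const std::size_t kHeaderSize = sizeof(std::max_align_t);

void* operator new(std::size_t size)
{
    void* raw = std::malloc(size + kHeaderSize);
    if (!raw)
        throw std::bad_alloc();
    *static_cast<std::size_t*>(raw) = size;   // stash the size in the header
    ++g_liveAllocations;
    g_liveBytes += size;
    return static_cast<char*>(raw) + kHeaderSize;
}

void operator delete(void* ptr) noexcept
{
    if (!ptr)
        return;
    void* raw = static_cast<char*>(ptr) - kHeaderSize;
    --g_liveAllocations;
    g_liveBytes -= *static_cast<std::size_t*>(raw);
    std::free(raw);
}

// Call this at shutdown: anything still live at that point is a leak.
void reportLeaks()
{
    std::printf("leaked allocations: %zu (%zu bytes)\n",
                g_liveAllocations, g_liveBytes);
}
```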

Other tricks are things like, when you load in a game level, loading it as a binary file already laid out in usable form in memory. Then fix up the pointers within it, from offsets within the file to actual locations in memory. This gives you super fast loading, no fragmentation, and cache coherency. And of course level size etc. is one of the biggest 'changeables' within a game, so if you can isolate this down to one allocation, you shouldn't really need to do much else in the way of allocation. Even for this you can just pre-allocate a big chunk for the biggest level size; that's what I've tended to do on console-like environments.
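
The offset-to-pointer fixup part, roughly (the file layout here is invented for illustration; a real format would have a versioned header, endian handling, and error checks):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// In the file, "pointers" are stored as byte offsets from the start of
// the blob, so the whole level can be read with one allocation and then
// used in place.
struct LevelHeader
{
    std::uint32_t entityCount;
    std::uint64_t entitiesOffset;   // offset of the Entity array
};

struct Entity
{
    float         position[3];
    std::uint64_t nameOffset;       // offset of a null-terminated name string
};

// Convert a stored offset into a real pointer inside the loaded blob.
template <typename T>
T* fixup(unsigned char* base, std::uint64_t offset)
{
    return reinterpret_cast<T*>(base + offset);
}

std::vector<unsigned char> loadLevel(const char* path)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f)
        return std::vector<unsigned char>();

    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);

    // One allocation for the whole level.
    std::vector<unsigned char> blob(static_cast<std::size_t>(size));
    std::size_t bytesRead = std::fread(blob.data(), 1, blob.size(), f);
    (void)bytesRead;
    std::fclose(f);

    // Fix up offsets into usable pointers; no per-object allocations.
    LevelHeader* header   = reinterpret_cast<LevelHeader*>(blob.data());
    Entity*      entities = fixup<Entity>(blob.data(), header->entitiesOffset);
    for (std::uint32_t i = 0; i < header->entityCount; ++i)
    {
        const char* name = fixup<const char>(blob.data(), entities[i].nameOffset);
        (void)name;   // data is ready to use directly from the blob
    }
    return blob;      // the level lives in this single contiguous block
}
```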

Of course this is for game code, where stability and speed are paramount. For tools and apps I'll be a lot more lax, and use dynamic allocation etc. (sometimes I don't even override new and delete, when I'm feeling like living life close to the edge).

It's also worth mentioning that there are some allocations you can't avoid, depending on the OS - API allocations such as DirectX and OpenGL resources. You can of course use pooling systems with your API resources too. In addition, on consoles you can often avoid this problem completely by using a resource directly from memory, as they may be UMA or give you more control over memory.

