Taking space in memory at will: code design

Started by Calin · 22 comments, last by Calin 1 year, 9 months ago

Is wasting memory when it can be avoided a bad thinking pattern?

Like for instance I have the option to keep a large array of things within a class (one array copy in each object instance) or keep it outside the class and access the data with functions when needed.

My project's facebook page is “DreamLand Page”


Calin said:
Is wasting memory when it can be avoided a bad thinking pattern?

It depends. How much time pressure are you under? Under a capitalist worldview (prevalent in the games industry), shipping and making a profit is more important than efficiency.

“Bad” is relative to your values. You should define what your values are before we talk about what they imply is “bad.”

Calin said:
Like for instance I have the option to keep a large array of things within a class (one array copy in each object instance) or keep it outside the class and access the data with functions when needed.

I think there's an assumption you've forgotten to state here. If the large array is genuinely per-instance, how is moving each object's array out of the object going to save memory? Are you talking about a case where the array has the same values for all objects? If that's the case you're thinking of, then yes, keeping a copy in every instance would be an unnecessary use of memory.
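To make that concrete, here's a minimal sketch of the two layouts; the type names and the 16 KiB table size are made up for illustration:

```cpp
#include <array>
#include <cstdint>

// Layout 1: each object carries its own copy of the table.
struct UnitPerInstance {
    std::array<std::uint32_t, 4096> table; // 16 KiB duplicated per object
    int x = 0, y = 0;
};

// Layout 2: one copy shared by every instance of the class.
struct UnitShared {
    static const std::array<std::uint32_t, 4096> table; // one 16 KiB copy total
    int x = 0, y = 0;
};

// Out-of-class definition of the shared table (values are placeholders).
const std::array<std::uint32_t, 4096> UnitShared::table = {};

static_assert(sizeof(UnitShared) < sizeof(UnitPerInstance),
              "the shared layout is smaller per object");
```

With 1,000 objects, layout 1 spends roughly 16 MB on identical copies; layout 2 spends 16 KiB once.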

A pattern it seems every generation needs to learn is that there are three phases to software development:

  1. Make it work.
  2. Make it work well.
  3. Make it work fast/efficiently.

The problem is that items 2 and 3 can be done almost indefinitely. Very often the first one is adequate, sometimes requiring one or two iterations that are constrained by development time.

Variations of this have existed since the 1970s. They include “Do the simplest thing that could possibly work” and the often misunderstood 1974 quote “Premature optimization is the root of all evil.” The full quote and the paragraphs around it apply here, too: programmers waste enormous amounts of time thinking and worrying about non-critical parts of the program, but it is absolutely essential that they invest in the critical parts. That's probably the case here, too. Don't worry about small efficiencies unless you know they're actually big efficiencies. Usually the only way to know is to write it once, then measure where the actual critical parts are.

And it isn't even unique to programming. “Perfect is the enemy of good” is a quote that dates back centuries, with variations found in Shakespeare and other old writings.

Yes, keeping redundant copies of something in memory is wasteful. However, storing something in memory rather than computing or reloading it is a frequent time/space tradeoff, so sometimes keeping things in memory when not strictly necessary is a good thing.
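As a small illustration of that tradeoff (the table size and wrapper here are hypothetical, not from this thread):

```cpp
#include <cmath>

constexpr int kTableSize = 256; // power of two, so we can mask instead of mod

// Spend ~1 KiB of memory on a precomputed sine table to avoid
// calling std::sin in a hot loop.
struct SineTable {
    float values[kTableSize];
    SineTable() {
        for (int i = 0; i < kTableSize; ++i)
            values[i] = static_cast<float>(std::sin(i * 6.28318530718 / kTableSize));
    }
    // Input is an angle in turns [0, 1); coarser than std::sin, but much cheaper.
    float operator()(float turns) const {
        int index = static_cast<int>(turns * kTableSize) & (kTableSize - 1);
        return values[index];
    }
};

static const SineTable fastSin; // memory traded for per-call compute
```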

Does it work? If so, don't touch it unless you really need to; spend your limited time in more important areas first.

Oberon Command, Frob: The reason I'm asking is that software running on the first generations of computers had to keep its memory footprint down because there was almost no memory available; developers had to make the most of every single byte. These days that constraint is gone, so the question is: do you still pretend there is no memory?

On the other hand I do see your point, frob: there are stages in development, and the later ones make sure every detail is in its place and everything is kept in good order, with no resources (CPU, memory) wasted.

My project's facebook page is “DreamLand Page”

It isn't gone, it has shifted.

Know your hardware.

Are you working on a computer from 30 years ago, a computer with 640 kilobytes of memory? Are you working on a game console with 256 MB, or 5 GB, or 16 GB memory? Are you working on a PC with 64-bit address space and virtual memory up to the available disk size? Are you working on a microcontroller or small embedded system game with 2 KB memory? I've done them all, and each has their own constraints.

We still have hardware limitations, but on the target platform the constraints are so distant they don't feel like they exist. On a modern PC, a lone game developer is unlikely to ever bump against them. The sky is the limit. For most people that's effectively boundless, though it becomes a problem if you're trying to hit the moon. You've got many gigabytes of memory, processors that can hit 10+ teraflops, and graphics cards that can real-time raytrace scenes more detailed than a physical photograph can capture. Those limits can be hit, of course, but as an individual you're unlikely to approach them.

What's the amount of memory you're talking about? I've had so many optimization discussions over the years where programmers argue over kilobytes, sometimes over individual bytes, while artists toss around 4K PBR textures without a care, putting together models with 20+ megabytes of material textures and colorizing them at runtime, and because the engine is built to handle it, they're not even a blip on the performance metrics. Know where the limits are for the hardware and software you're on. Often it isn't what you think.

It's important to know when optimization matters and when it does not. An important question to ask yourself is: how many instances of this do I have?

If you have a 1024×1024 tile map, then your per-map-cell storage is going to be multiplied by a factor of 1M, which may matter for low-end devices but probably not for modern PCs. Try to keep per-map-cell data below 1KiB, but don't sweat the difference between one byte and two bytes.

If you have a 1024×1024×1024 voxel map, then your per-map-cell storage is going to be multiplied by a factor of 1G, which matters a lot, even on modern PCs. Keep your per-map-cell data at or below a single byte if at all possible.

If you just have a single local variable in your main game loop, then there is no multiplication factor at all. Unless the variable itself is hundreds of megabytes in size, ignore it.
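A quick back-of-the-envelope sketch of that arithmetic (the cell layouts here are invented for illustration):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical cell layouts; the numbers that matter are the multipliers.
struct TileCell { std::uint16_t type; std::uint8_t flags; std::uint8_t height; }; // 4 bytes
using VoxelCell = std::uint8_t;                                                   // 1 byte

int main() {
    constexpr unsigned long long tiles  = 1024ull * 1024;        // 2D map: ~1M cells
    constexpr unsigned long long voxels = 1024ull * 1024 * 1024; // 3D map: ~1G cells
    std::printf("tile map:  %llu MiB\n", tiles  * sizeof(TileCell)  >> 20); // 4 MiB
    std::printf("voxel map: %llu MiB\n", voxels * sizeof(VoxelCell) >> 20); // 1024 MiB
}
```

At four bytes per cell, the tile map is a rounding error on a PC; the same four bytes per voxel would be 4 GiB.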

@Calin There is no fixed rule for something like this. It's situational. How big is the array? How many copies will there be? One thing to remember is that with newer computers, cache is king, so keeping things tight in memory is often desired. Also, jumping around wildly in memory should be minimized.

So, looking at your specific case: you need to consider how much extra memory is being added. Will this cause cache misses? Or, if you are accessing other data outside the array in the class, maybe it will do the reverse and be beneficial. You will likely need to benchmark it to get a good answer.

Just as an example of how important cache is, I wrote a test program a couple of years back that accessed a large array of data in various ways. In all cases each item in the array was accessed once. In the first case I ran straight through the array. In the second case I accessed it in random order through an array of pointers, but I went through that pointer array sequentially (I had randomized it up front). Finally, I accessed the data through a linked list threaded through it, in the same order I had used in the second case.

The first case was obviously the fastest. The second case was roughly 5X slower. And the third case was roughly another 10X slower (50X total). Evidently the hardware/compiler was able to optimize for the array of pointers somewhat, yet there was still a 5X degradation. Bottom line… benchmark.
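For reference, a rough reconstruction of that kind of test might look like this; the sizes and node layout are guesses, not the original program:

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

struct Node {
    std::uint64_t value = 1;
    Node* next = nullptr;
};

// Time a callable and return elapsed seconds.
template <typename Fn>
double timeIt(Fn fn) {
    auto t0 = std::chrono::steady_clock::now();
    fn();
    std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;
    return dt.count();
}

int main() {
    const std::size_t n = 1u << 24; // 16M nodes (~256 MB), far larger than any cache
    std::vector<Node> data(n);

    // A random visiting order, built once up front.
    std::vector<Node*> ptrs(n);
    for (std::size_t i = 0; i < n; ++i) ptrs[i] = &data[i];
    std::shuffle(ptrs.begin(), ptrs.end(), std::mt19937{42});

    // Thread a linked list through the nodes in that same random order.
    for (std::size_t i = 0; i + 1 < n; ++i) ptrs[i]->next = ptrs[i + 1];

    volatile std::uint64_t sink = 0; // keep the sums from being optimized away

    // Case 1: straight through the array.
    double t1 = timeIt([&] { std::uint64_t s = 0; for (const Node& d : data) s += d.value; sink = s; });
    // Case 2: sequential walk over the shuffled pointer array.
    double t2 = timeIt([&] { std::uint64_t s = 0; for (Node* p : ptrs) s += p->value; sink = s; });
    // Case 3: chase the linked list (same visit order as case 2).
    double t3 = timeIt([&] { std::uint64_t s = 0; for (Node* p = ptrs[0]; p; p = p->next) s += p->value; sink = s; });

    std::printf("array: %.3fs  pointer array: %.3fs  linked list: %.3fs\n", t1, t2, t3);
}
```

All three loops touch every node exactly once; only the memory access pattern differs, which is what the ratios measure.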

From what you guys are saying, the conclusion is that using memory well is an issue only where it counts (large amounts of memory involved, speed concerns, etc.). So basically it's not a guiding principle in programming: if you think you should use memory, use it; don't complicate your program to keep memory usage low unless there are good reasons at stake.

My project's facebook page is “DreamLand Page”

The underlying principle here is “invest effort where it makes a difference” (or in your words, “where it counts”).

EDIT: This applies in general, to many possible topics: memory, but also e.g. performance.

Calin said:
So basically it's not a guiding principle in programming: if you think you should use memory, use it; don't complicate your program to keep memory usage low unless there are good reasons at stake.

Don't prematurely optimize. Use it if you need it. Don't try to make things faster until you've measured and you know you actually need to.

In cases of memory, recognize the reality of the machine you're on. You're on a machine with gigabytes of real memory and hundreds of gigabytes of virtual memory, a machine that can casually throw around hundreds of megabytes for tasks without too much bother. You can spawn a few dozen Chrome tabs, each using a hundred megabytes or more, and the machine won't break a sweat. Don't be wasteful in your practices, but if you need kilobytes or even megabytes here and there, use what you need to get the job done.

This topic is closed to new replies.
