Is it just me or do these programming techniques suck?

Quote:Original post by Teknofreek
The only one that holds some merit to me is not allocating any memory in the core code. You could use custom allocators, but that scheme has its own quirks. For example, what happens if you want to use different allocation strategies for large and small objects? What if you want to use different strategies for objects that are allocated once per game/level/etc. versus objects that are constantly being created and destroyed during the game? And so on. The problem with simply providing alloc/free callbacks is that it might not allow for enough customization.

Another way to go about it would be to allow the users to customize the memory allocation via template policies. That can also get a bit icky, hard to debug, and it may confuse some less experienced programmers. Personally, I think I prefer this approach to simple callbacks though.

You're right, a simple callback approach would not be as flexible as the current system. To address those limitations my preference would also be for some kind of template-based allocator scheme, something like the STL allocators. I don't think it would be any more confusing for less experienced programmers than the current scheme; in fact, for less experienced C++ programmers it's probably much less confusing (C programmers might be more comfortable with the current system). A template-based allocator scheme can be made easy in the simple case (for STL containers you don't even have to know the allocators exist if you're happy with the default behaviour) and just as flexible in the general case. It would require a bit more up-front engineering and design effort, but I think the payoff would be well worth it.
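To make that concrete, here's a rough sketch of the sort of thing I mean (names invented for illustration, and restricted to POD element types to keep it short): the allocation policy is a template parameter with a default, so simple users never see it and demanding clients can swap in their own.

#include <cstddef>
#include <cstdlib>

// Default policy: plain malloc/free.
struct MallocAllocator
{
    static void* Allocate(std::size_t bytes) { return std::malloc(bytes); }
    static void Deallocate(void* p)          { std::free(p); }
};

// Library container parameterized on an allocation policy. Users happy
// with the default never have to know the allocator parameter exists.
template <typename T, typename Alloc = MallocAllocator>
class Buffer
{
public:
    explicit Buffer(std::size_t count)
        : data_(static_cast<T*>(Alloc::Allocate(count * sizeof(T))))
        , count_(count) {}
    ~Buffer() { Alloc::Deallocate(data_); }

    T& operator[](std::size_t i) { return data_[i]; }
    std::size_t size() const { return count_; }

private:
    Buffer(const Buffer&);              // non-copyable, for brevity
    Buffer& operator=(const Buffer&);

    T* data_;
    std::size_t count_;
};

// A client who wants per-level pools, separate small/large object
// strategies, and so on just supplies a different policy:
//   Buffer<int, MyPoolAllocator> indices(1024);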
Quote:
However, I think the real reason that the no-allocation policy feels so icky in this case is because of #2. If the class size was consistent, it would be easy to write some wrappers to allocate arrays, and so on.

That definitely makes things worse. What really bothers me is that it throws out the window any possibility of the compiler helping you catch your mistakes. Subverting the type system like this makes it much harder to catch errors at compile time which is where they are easiest and cheapest to fix.
Quote:
Well, if you are working with the same stuff I'm working with...good luck! The tools that accompany the stuff I'm working with make the engine-side code look like gold!

Ha, we very well might be working on the same project if that's the case [smile]

Game Programming Blog: www.mattnewport.com/blog

Quote:Original post by Saruman
Quote:Original post by mattnewport
1.) Coding standard that requires no headers to include other headers.

David Eberly actually talks about this in his Game Engine Architecture book. One reference that he makes is when he worked for a company that had all the includes in the headers... and when he went through and changed it the build times went from ~30 minutes to ~3 minutes.

I haven't read that book; I've read 3D Game Engine Design but not the Architecture book. I'd be surprised if he really went through, removed all the includes from the headers, and got a speed-up from that alone. Removing all unnecessary includes from the headers would speed up the build, but if you remove the necessary headers you've just got to move the same includes into the .cpp files, so I don't see where the gain comes from. As I said before, I think it's vitally important on a large C++ project to minimize your physical dependencies, but banning headers from including other headers does nothing to achieve that: you still have exactly the same physical dependencies, it's just that you have to manually specify them everywhere you include the header.
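For the record, the way to actually cut the dependencies is forward declaration: a header includes only what its own declarations need to compile, and forward-declares types it merely points at. A sketch, with invented names:

// Renderer.h - compiles standalone, but pulls in as little as possible.
#ifndef RENDERER_H
#define RENDERER_H

#include "Viewport.h"   // needed: Viewport is used by value below

class Mesh;             // forward declarations: only pointers and
class Camera;           // references appear here, so no #include needed

class Renderer
{
public:
    void Submit(const Mesh& mesh);
    void SetCamera(const Camera* camera);
private:
    Viewport viewport_;      // by-value member: full definition required
    const Camera* camera_;
};

#endif // RENDERER_H

// Renderer.cpp then includes Mesh.h and Camera.h itself, so editing those
// headers only recompiles the .cpp files that genuinely depend on them.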

Game Programming Blog: www.mattnewport.com/blog

1: That's weird. It replaces a good dependency (trackable by any automatic tool that looks for includes) with a bad one, where you have no idea which headers a file needs to work, or in what order. It merely hides the dependency and makes it more obscure.
Only the removal of unnecessary includes could speed things up; compilers are good at handling #pragma once, and most can even detect the classic include-guard idiom on their own.

2: Premature optimization?
I've done it exactly once, for a really heavily used struct where performance was critical beyond any doubt (a software renderer). But doing that with classes across the board? OMG.

3: If there was any reason behind 1 (a premature attempt at compile-speed optimization?), it's destroyed by 3.

4: If 2 isn't scary yet, this makes 2 even scarier.
I don't like allocations either, but sometimes they're unavoidable, and to avoid them you end up allocating big chunks up front, which sucks.
I prefer to allocate everything possible at initialization, and maybe do some allocations at render-preparation time (I work on non-realtime software renderers). But really, it wouldn't be possible to write what I'm working on now under such rules.

In summary, a lot of people do completely idiotic things. Sometimes (hopefully rarely enough) even I do something stupid.
Quote:Original post by mattnewport
I've seen plenty of books that say any header should be able to compile on its own as the only line in a .cpp file. This prevents situations where you must include headers in exactly the right order to get anything to compile.


But each .cpp file would compile on its own. It would also remove nasty pollution of the namespace. The best solution would be to not use .h files at all - use Java: multiplatform, async memory allocation, inner classes, escape analysis, heavy inlining independent of compile-time information. The other alternative is to remove .h files from the C++ standard, and change the data type definitions in the C++ standard to fixed, predefined sizes. BTW, most large projects did that with macros: __int64 as long and __int32 as int.

Quote:handling include guards that this makes a difference to build times? Is it worth the pain?


You are using include guards? Oops. I use Eclipse and dynamic recompilation at that stage of a project, if the first global compile after multiple independent standalone compiles didn't work without problems. And when I use C++ I'd rather let the compiler do its work without include guards: it could compile in the background, in a separate directory, and if necessary I could look at the error output of the previous build. Does Eclipse have something like a fast C++ recompiler? It has several plugins; one of them could make this considerably easier.

Quote:No run time memory allocation.

They are scared of memory allocation? Some Java developers were scared by reallocation of small objects. They were sissies. If it isn't in a tight loop... it's nearly harmless. In comparison to Bush's talks... ~_^

What if you built your own memory manager? It might simplify and speed things up, and there are some memory managers with very fast DEALLOCATION, like a copying generational garbage collector.

Quote: you can't use sizeof(); you can't create arrays of the object; you have a maintenance nightmare (since the actual layout of the data is not defined explicitly anywhere, it is all implicit in the code that manipulates the data); you have no hope of automating serialization by analyzing the declaration of the class

sizeof() isn't necessarily reliable in every situation. The size of a value kept in [x87 FP] registers would be 80 bits, but in [SSE2/4] it would be 32 bits - the same data type. Of course C++ doesn't allow that much low-level control, but losing sizeof() is no great loss; it's easy to work without it. (Imagine how many times Java programmers have wanted to know how big their class or data type is. Would you believe they finished their programs without knowing, and the programs worked? The actual answer is: it depends - on JIT optimizations, the amount of memory, and the CPU architecture. For example, references are 64 bits on 64-bit computers. And yes, in one program you can have sizeOf(byte) equal to both 1 and 4.)

Do you claim you can't do:
Object[] references = new Object[amountOfObjects];
SomeClass a = new SomeClass();
references[i++] = a;

I think it would work without problems. Just be careful with nulling.

Automatic serialization is an automatic nightmare once you discover you could do it better, or want to change serialization algorithms...

You know the size and you can blit it to a file, so these worries come too early... Just you wait until the end of the project. ^_^



Quote:Original post by Raghar
Quote:Original post by mattnewport
I've seen plenty of books that say any header should be able to compile on its own as the only line in a .cpp file. This prevents situations where you must include headers in exactly the right order to get anything to compile.

But each .cpp file would compile on its own. It would also remove nasty pollution of the namespace.

Not sure exactly what you're getting at there, could you elaborate?
Quote:The best solution would be to not use .h files at all - use Java: multiplatform, async memory allocation, inner classes, escape analysis, heavy inlining independent of compile-time information. The other alternative is to remove .h files from the C++ standard, and change the data type definitions in the C++ standard to fixed, predefined sizes. BTW, most large projects did that with macros: __int64 as long and __int32 as int.

Using Java is not an option on this project: several of the platforms we have to support have no Java runtime, and even if they did, Java's performance characteristics are not appropriate for this software. Also, all the clients of our library use C++. It would be great if C++ didn't need header files and had a compilation model more like Java or C#, but it doesn't, so we have to make the best of what's available. C99 defines fixed-width integer types, and we plan to adopt those for this project.
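(For the record, those are the int32_t/uint64_t family from C99's <stdint.h>; the type names are standard, though whether a given C++ toolchain ships the header is another matter:)

#include <stdint.h>   /* C99; many C++ compilers provide it as an extension */

int32_t  frameCounter  = 0;   /* exactly 32 bits on every platform */
uint64_t bytesStreamed = 0;   /* exactly 64 bits, no guessing about long */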
Quote:
Quote:handling include guards that this makes a difference to build times? Is it worth the pain?


You are using include guards? Oops. I use Eclipse and dynamic recompilation at that stage of a project, if the first global compile after multiple independent standalone compiles didn't work without problems. And when I use C++ I'd rather let the compiler do its work without include guards: it could compile in the background, in a separate directory, and if necessary I could look at the error output of the previous build. Does Eclipse have something like a fast C++ recompiler? It has several plugins; one of them could make this considerably easier.

We're not using Eclipse; I'm not really familiar with it. I'm not sure I understand what you're getting at with the include guards - I don't know of any C++ compiler that doesn't require either include guards or a proprietary equivalent (like #pragma once) for correct operation. If you want to stick to standard C++ and get correct behaviour you have to use include guards, and you have to hope that your compiler is smart enough to only open the files once (most are these days).
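Just so we're talking about the same thing, here are the two forms in question, using a made-up header name:

// Portable include guard - standard C++, works with every compiler, and
// most modern ones detect the idiom and never reopen the file.
#ifndef MYLIB_MATRIX_H
#define MYLIB_MATRIX_H

class Matrix { /* ... */ };

#endif // MYLIB_MATRIX_H

// Proprietary equivalent, not standard but widely supported:
// #pragma once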
Quote:
Quote:No run time memory allocation.

They are scared of memory allocation? Some Java developers were scared by reallocation of small objects. They were sissies. If it isn't in a tight loop... it's nearly harmless. In comparison to Bush's talks... ~_^

What if you built your own memory manager? It might simplify and speed things up, and there are some memory managers with very fast DEALLOCATION, like a copying generational garbage collector.

On many of the platforms we work with and with the near realtime constraints we have to operate under there are good reasons to worry about run time memory allocation. It's not just the speed of the allocation we have to worry about but the fact that there is a hard limit on the total memory available, possible concerns about fragmentation due to not all platforms supporting virtual addressing, etc. Generally the clients of our libraries expect to be able to control all the memory allocations our library uses so writing our own memory manager isn't really an option.
Quote:
Quote: you can't use sizeof(); you can't create arrays of the object; you have a maintenance nightmare (since the actual layout of the data is not defined explicitly anywhere, it is all implicit in the code that manipulates the data); you have no hope of automating serialization by analyzing the declaration of the class

sizeof() isn't necessarily reliable in every situation. The size of a value kept in [x87 FP] registers would be 80 bits, but in [SSE2/4] it would be 32 bits - the same data type. Of course C++ doesn't allow that much low-level control, but losing sizeof() is no great loss; it's easy to work without it. (Imagine how many times Java programmers have wanted to know how big their class or data type is. Would you believe they finished their programs without knowing, and the programs worked? The actual answer is: it depends - on JIT optimizations, the amount of memory, and the CPU architecture. For example, references are 64 bits on 64-bit computers. And yes, in one program you can have sizeOf(byte) equal to both 1 and 4.)

sizeof(float) can vary across platforms but on any given platform it's always fixed for any specific bit of code. Anyway, the real problem with this system is not so much that the user doesn't know the size of the class but that the compiler doesn't know the size of the class. That means you can't declare arrays of the type, use pointer arithmetic, use new or delete on the object, etc.
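To illustrate with a made-up name: once the compiler only ever sees a declaration (which is effectively what this scheme gives you), everything that needs the size stops compiling.

class Opaque;   // declaration only: the compiler never learns sizeof(Opaque)

void Demonstrate(Opaque* p)
{
    // None of the following compiles against an incomplete type:
    // Opaque local;                    // error: size unknown
    // Opaque array[16];                // error: can't lay out an array
    // Opaque* q = p + 1;               // error: arithmetic needs the size
    // std::size_t n = sizeof(Opaque);  // error: sizeof of incomplete type
    // Opaque* r = new Opaque;          // error: new needs the size
    (void)p;    // holding and passing pointers is about all that works
}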
Quote:
Do you claim you can't do:
Object[] references = new Object[amountOfObjects];
SomeClass a = new SomeClass();
references[i++] = a;

I think it would work without problems. Just be careful with nulling.

That looks like Java to me, so no, we can't do that.
Oops, that AP was me.

Game Programming Blog: www.mattnewport.com/blog

Quote:Original post by Dmytry

I don't like allocations either, but sometimes they're unavoidable, and to avoid them you end up allocating big chunks up front, which sucks.
I prefer to allocate everything possible at initialization, and maybe do some allocations at render-preparation time (I work on non-realtime software renderers). But really, it wouldn't be possible to write what I'm working on now under such rules.



This may sound stupid, but what exactly defines a run-time allocation? How does this differ from other types of allocation?
(2) sounds like a flexible array member in C99. It was a non-standard but often-used technique in C (used often enough that the committee finally gave it its blessing). The C99 version goes something like:

struct SomeStruct {
    int somevar;
    // ...
    char anotherVar;
    int data[];    // flexible array member: no size declared
};

sizeof returns the size of the struct less the last element (although it will include any padding between anotherVar and data).
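For completeness, the usual way to allocate one (C99; before the blessing, the same trick was done with a data[1] member and over-allocation):

#include <stdlib.h>

/* Allocate a SomeStruct with room for n trailing ints in one block. */
struct SomeStruct* MakeSomeStruct(size_t n)
{
    struct SomeStruct* s = malloc(sizeof(struct SomeStruct) + n * sizeof(int));
    if (s != NULL) {
        s->somevar = 0;
        /* s->data[0] through s->data[n - 1] are now usable. */
    }
    return s;
}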
Quote:Original post by Anonymous Poster
Quote:Original post by Dmytry

I don't like allocations either, but sometimes they're unavoidable, and to avoid them you end up allocating big chunks up front, which sucks.
I prefer to allocate everything possible at initialization, and maybe do some allocations at render-preparation time (I work on non-realtime software renderers). But really, it wouldn't be possible to write what I'm working on now under such rules.



This may sound stupid, but what exactly defines a run-time allocation? How does this differ from other types of allocation?

It may mean literally anything, as every allocation happens at run time. What is usually meant is allocations and deallocations during the bulk of the program's run time - like allocating a string for user input at the moment the user enters it. I wouldn't want allocations to happen in the pixel loop of a raytracer, for example, but allocations prior to rendering are okay. The number of allocations and deallocations should be kept minimal.

edit: indeed, there could also be memory fragmentation issues. But you can still use good garbage collection in such cases.

[Edited by - Dmytry on June 17, 2005 4:58:05 PM]
I've worked on an embedded system that in some cases was meant to run for weeks at a time (preferably indefinitely). Using new and delete in this system resulted in memory fragmentation over time, and the system would slow down considerably after a few days. It's really hard to sort out, so I think 4 is a good policy for some systems.
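The usual workaround, for what it's worth, is to grab the memory once at startup and recycle fixed-size blocks, so the heap never gets a chance to fragment. A minimal sketch (illustrative only, not from the actual system):

#include <cassert>
#include <cstddef>

// Fixed-size block pool: one block of memory up front, O(1) alloc/free,
// zero fragmentation because every block is the same size.
class FixedPool
{
public:
    // 'memory' must be a suitably aligned region of blockSize * blockCount
    // bytes that outlives the pool; the pool itself never allocates.
    FixedPool(void* memory, std::size_t blockSize, std::size_t blockCount)
        : freeList_(0)
    {
        assert(blockSize >= sizeof(Node));
        char* p = static_cast<char*>(memory);
        for (std::size_t i = 0; i < blockCount; ++i) {  // thread every block
            Node* node = reinterpret_cast<Node*>(p + i * blockSize);
            node->next = freeList_;
            freeList_ = node;
        }
    }

    void* Allocate()            // O(1); returns 0 when the pool is exhausted
    {
        if (!freeList_) return 0;
        Node* node = freeList_;
        freeList_ = node->next;
        return node;
    }

    void Free(void* block)      // O(1); never touches the system heap
    {
        Node* node = static_cast<Node*>(block);
        node->next = freeList_;
        freeList_ = node;
    }

private:
    struct Node { Node* next; };
    Node* freeList_;
};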

FWIW, Fortran77 doesn't have dynamic allocation.

