Exceptions...

quote:Original post by null_pointer
If you re-read my examples and explanations in my previous post, you'll see that is not my point. I know how exceptions work. I do understand how destructors are called, why they are called, what auto_ptr does, etc. I was not questioning the validity of auto_ptr as a replacement for memory allocated on the heap, but the validity of auto_ptr as a replacement for memory allocated on the heap that could have been allocated on the stack.


For objects that will live beyond the lifetime of the generating function, it is more efficient (aside from built-in types and extremely small classes) to allocate such objects on the heap. Heap space is normally a lot bigger than stack space.

MSN
msn12b: I was not aware that allocating memory on the stack in the calling function and passing it via a pointer to the called function involves more overhead than allocating memory on the heap in the calling function and passing it via a pointer to the called function. I am most certainly aware of the fact that putting a 4-byte pointer (on 32-bit platforms) on the stack is much more efficient than putting a large object (2 MB) on the stack, and involving copy constructors.

What, exactly, are you trying to say?


- null_pointer
Sabre Multimedia
quote:Original post by null_pointer
What, exactly, are you trying to say?


Apparently we've managed to see two different things. Basically, if a function f needs to return a large object, it is better to return memory from the heap. If, however, during the lifetime of f() an exception is thrown, it is easier to clean up the allocated memory with auto_ptr than it is to write an equivalent try/catch block.

Allocating on the stack is fine if an object does not outlive the function in which it is instantiated. If it does, the object should be created on the heap.
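
A minimal sketch of that rule (the function names here are made up for illustration):

class foo { /* ... */ };

// Wrong: the stack object dies when the function returns,
// leaving the caller with a dangling pointer.
foo* make_foo_bad()
{
foo f;
return &f; // f is destroyed here
}

// Right: an object that must outlive its function comes from the heap.
foo* make_foo_good()
{
return new foo; // the caller takes over responsibility for delete
}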

MSN
msn12b: Thanks for clearing that up!

Now I'd like to discuss something that's been bothering me, but I kept my fingers away from the keyboard for a while because I didn't want it to complicate the issue. I hear a lot about code responsibility, and there's an old C++ adage that those who "new" something must also "delete" it. Why would you want to allocate memory on the heap and then return? I should think it would be less work to allocate it on the stack in the calling function, then pass it to the called function for initialization or whatever, and then use it in the calling function. Here's an example:


// this is the class
class foo { int dummy[64]; }; // for size

// these are the two functions
void create_foo_and_do_tasks(); // calling function
void initialize_foo(foo* p); // called function


void initialize_foo(foo* p) { /* do something... */ }


void create_foo_and_do_tasks()
{
foo my_foo; // stack allocation of ~256 bytes

initialize_foo(&my_foo);

// do something else...

// my_foo is automatically destroyed here
}



Is that really slower than this?:


class foo { int dummy[64]; }; // for size

void do_tasks(); // calling function
foo* create_foo_and_initialize_it(); // called function

foo* create_foo_and_initialize_it()
{
auto_ptr<foo> p(new foo);

// do something...

return p.release();
}

void do_tasks()
{
// heap allocation of 256 bytes
auto_ptr<foo> p(create_foo_and_initialize_it());

// do something else...

// p goes out of scope and destroys the new foo
}



Perhaps I am not understanding the instances in which you would want a function to allocate memory and then return. Oh, well. Let's take a look at the "stretched out" versions of these functions, to see what total code we have actually used.



// #1

void create_foo_and_do_tasks()
{
foo my_foo; my_foo::foo();

// do something...

// do something else...

my_foo::~foo()
}




// #2

void create_foo_and_do_tasks()
{
auto_ptr p1 = new foo; p1::auto_ptr();

// do something...

auto_ptr p2 = p1.release(); p1::~auto_ptr();

// do something else...

p2::~auto_ptr(); __unnamed::~foo();
}



I've even put in the constructor and destructor calls, but I could include the if() statements auto_ptr must use to manage the heap memory, if you like. Let's look at #2 again...


// #2

void create_foo_and_do_tasks()
{
auto_ptr p1 = new foo;
p1::auto_ptr() { ownership = false; p = NULL; }
p1::operator=(foo* __unnamed) { if( ownership ) delete p; p = __unnamed; ownership = true; }

// do something...

auto_ptr p2 = p1.release();
p1::release() { ownership = false; }
p2::auto_ptr() { ownership = false; p = NULL; }
p2::operator=(foo* __unnamed) { if( ownership ) delete p; p = __unnamed; ownership = true; }
p1::~auto_ptr() { if( ownership ) delete p; }

// do something else...

p2::~auto_ptr() { if( ownership ) delete p; }
*p::~foo() {}
}



Call me crazy, but #2 seems to take a lot of code and function calls to do what #1 did with simple stack allocation. I don't understand why it would be necessary to create an object on the heap in a function and then return it...that's bad coding in my book, but perhaps that's just inexperience talking. I have to go now, so I'll add more to this later.


*later*


OK, I'm back now. The (large) example I just listed was my point about examining "total" code and not "perceived" code. It's easy to dismiss auto_ptr as "necessary" sometimes without even thinking of how it might be done better. But if you list it all in one straight line - like the compiler sees it - then it is a simple task to evaluate it objectively. Classes don't auto-magically make faster, cleaner code, and they obviously don't fix logic flaws just by putting some code in member functions. I'm all for encapsulation, but let's make sure it's right first, ok?

A good function contains no code above what is required to do the task without the function. A good class contains no code in its member functions above what is required to do the task without the class.

I have to get offline now, again.



- null_pointer
Sabre Multimedia


Edited by - null_pointer on May 23, 2000 10:12:24 AM
quote:Original post by null_pointer

I hear a lot about code responsibility, and there's an old C++ adage that those who "new" something must also "delete" it. Why would you want to allocate memory on the heap and then return?


Well, to be pedantic, many functions need to allocate memory and not delete it. For example, most constructors do this. I think you really meant "Why would you want a class to allocate memory on the heap and not delete it itself?"

One answer is Abstract Factories. Eg:
String type;
// Read the type in from the file
cin >> type;
// Now create an instance of that type, adding it to the list
BaseAbstractObject* newObj = Factory.CreateObject(type);
MainObjectList.push_back(newObj);
// ...etc...

Factory.CreateObject() is responsible for allocating the memory based on the type given to it. It is the only good way I am aware of for solving the problem both you and I came up against recently: being able to instantiate a given class in a hierarchy. The Factory class has no knowledge (nor should it) of the lifetime of the memory it allocates.

Sometimes you have to pass on responsibility to a different part of the system. Providing that assignment of responsibility is known and documented, that's ok.

Your example of stack-based vs. auto_ptr-based is ok in that, in that case, there is no real difference that I can see, anyway; my original example was perhaps not the best. However, there are better reasons, which I have thought of since then.

One contrived example: there are some classes which are set up so you cannot allocate them on the stack. Maybe someone more knowledgeable than me can remember why this might be useful or necessary... There is also sometimes a limit to the size of objects you can create on the stack, I believe? Is it ok to do, for example: int double_buffer[1024][768]?

The main example I can think of is this, however: polymorphic classes. Let's say I have a function which calls that Factory method above to create an object (of an unspecified subclass) which has a limited scope of this single function. I cannot allocate that object on the stack, since I don't know what type it is at compile time. Therefore I need to deal with a pointer to the base class, ruling out a stack approach. The safest and cleanest way to deal with pointers while ensuring deletion (and our mutual favourite, exception-safety) would be to encapsulate the result in an auto_ptr.
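
A sketch of that last point (the class and function names are hypothetical, not from anyone's real code):

#include <memory>
#include <string>

class Shape { public: virtual ~Shape() {} virtual void Draw() = 0; };
class Circle : public Shape { public: void Draw() { /* ... */ } };
class Square : public Shape { public: void Draw() { /* ... */ } };

// The factory allocates; it cannot know how long the object will live.
Shape* CreateShape(const std::string& type)
{
if( type == "circle" ) return new Circle;
if( type == "square" ) return new Square;
return 0;
}

void UseShapeBriefly(const std::string& type)
{
// The concrete type is unknown at compile time, so the object cannot
// live on the stack; auto_ptr still guarantees the delete, even if
// Draw() throws.
std::auto_ptr<Shape> s( CreateShape(type) );
if( s.get() )
s->Draw();
} // s's destructor deletes the Shape here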

You'll find more auto_ptr advocacy in More Effective C++ by Scott Meyers, who seems to know what he's talking about.

I still contend that allocating memory is not part of the algorithm I described in a previous post. It is necessary for it to function, but then so is electricity to the CPU. It is an implementation detail, not an algorithmic detail. Most algorithm books will not detail memory allocation. Does it not make sense to move that out of the function performing the algorithm? In my opinion, memory allocation and deallocation are prime candidates to be abstracted away.
Briefly, a couple of other points I forgot last time. Sorry it's in a separate post, but I had several msgs to reply to, having been away.

quote:Original post by null_pointer

The following code snippet is ludicrous:


void do_something()
{
bool own_memory = false;

byte* heap_memory = new byte[256];
own_memory = true;

if( own_memory )
delete[] heap_memory;
}



Isn't this ludicrous, too:


void do_something()
{
auto_ptr<byte> heap_memory(new byte[256]); // (caveat: auto_ptr calls delete, not delete[], so holding an array like this is actually unsafe)
}



It uses the "same" code.

I said, if you need it, use it. Otherwise, simple calls to new() and delete() will do. To put it another way, if you are going to add in a bool anyway to keep track of it, then use auto_ptr.

The point of encapsulation is to organize code, and not to cover the programmer's mistakes or to save typing (although it tends to do that).


Heh. A good compiler will probably optimize away the bool stuff in the first example anyway.

Saving typing reduces redundancy. Reducing redundancy minimizes the chance of making mistakes. Mistakes cost time and/or money.

Any time you find yourself repeating something, it should be a candidate for abstraction and/or encapsulation. I showed examples where 'delete' was used at several different points within a function, therefore abstracting it away makes sense. Your comments on 'just use the stack' were valid, but were addressed in the other message.
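
For instance (a sketch, assuming some class foo): with a raw pointer every exit path needs its own delete, while auto_ptr collapses them all into one destructor:

#include <memory>

class foo { int dummy[64]; };

void process_raw(bool quick)
{
foo* scratch = new foo;
if( quick ) { delete scratch; return; } // exit #1 needs a delete
// ... more work that might return early ...
delete scratch; // and so does the normal exit
}

void process(bool quick)
{
std::auto_ptr<foo> scratch( new foo );
if( quick ) return; // the destructor deletes it here
// ... more work that might return early ...
} // and here, with no explicit delete at all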

quote:
Within one "layer" of software, you have no dependencies, and thus assertions have every advantage over exceptions because it is not conceivable that you will ever need to clean up -- every condition is known and can be easily controlled. Exceptions are merely excess overhead and totally unnecessary here.

The only problem with case 2 is that it doesn't exist in real life! "Layers" of software aren't just classes; they include the OS, the BIOS, other programs, different parts of your program, the classes they use, etc. Having everything co-dependent upon each other with assertions is the equivalent of mass suicide in release apps (as you pointed out). In debug apps it is equally unacceptable for me, as I do not want my dev PC crashing constantly. Exceptions provide a stable environment for debugging. Sometimes you must have a performance hit to do it right.


I've never found assertions to cause my PC to crash. The only problems I can see are in DirectX exclusive mode: and I use logging and exceptions here, as there is no alternative.

I am not concerned about performance at the pre-release stage; I am concerned with correctness. Now, since things -will- go wrong, I need to be able to track them. Exceptions hose the stack. I need to be able to see the stack to see what went wrong. Therefore using an exception is counterproductive. 99% of the time, my PC and debugger can deal with resources not being deallocated immediately, and it would slow down development to have these objects destroyed before I can see what was in them. Whatever other benefits exceptions have, they don't help for debugging.

quote:
1) I'm a C++ programmer. I had better know what delete() does!

Yes, and the general idea is that once you have mastered a concept, you encapsulate it and move on. Of course you need to know what it does. But it doesn't mean you need it everywhere explicitly.

quote:
3) Extra effort must be justified. Every C++ programmer had better know how to use new() and delete(). Garbage collection in C++ is, then, extra work. Can you justify it?

I was under the impression that you disagreed with garbage collection on principle, rather than just garbage collection in C++. Apart from simplistic implementations such as auto_ptr or fairly basic reference counting, I am not interested in garbage collection in C++ either. But I think it works wonderfully for Java and doesn't 'introduce bugs' any more than null pointers and memory leaks do in C++.
However, I am interested in abstracting away memory management wherever I can, in the interests of robustness and removing implementation details from algorithms.
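
For reference, the 'fairly basic reference counting' mentioned above can be as small as this sketch (the class name is made up; copies share one count and the last copy deletes; assignment is omitted to keep it short):

template <typename T>
class counted_ptr
{
public:
explicit counted_ptr(T* p) : ptr_(p), count_(new int(1)) {}
counted_ptr(const counted_ptr& other)
: ptr_(other.ptr_), count_(other.count_) { ++*count_; }
~counted_ptr()
{
if( --*count_ == 0 ) { delete ptr_; delete count_; }
}
T& operator*() const { return *ptr_; }
T* operator->() const { return ptr_; }
private:
counted_ptr& operator=(const counted_ptr&); // deliberately disabled in this sketch
T* ptr_;
int* count_;
};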
quote:
If you are only looking to garbage collection to hide your own inability to understand new() and delete(), then you have a rude awakening ahead.

In C++, this is true. In other languages, it isn't, and shouldn't need to be.
quote:Original post by Kylotan

Well, to be pedantic, many functions need to allocate memory and not delete it. For example, most constructors do this. I think you really meant "Why would you want a class to allocate memory on the heap and not delete it itself?"


Actually, the phrase I used was "I don't understand why it would be necessary to create an object on the heap in a function and then return it..." Constructors don't have return values...

The point of that example was to show that when evaluating whether an abstraction is correct, it is necessary to examine the total code and not perceived code. (ok, that's about the 3rd time I've said this.) I'm all for encapsulation, but I want to make sure it will benefit me. There are 2 cases here:

1) Proper encapsulation does slow the code down.

    If this is true, then the encapsulation is merely a filler for the programmer's own inability. Otherwise, encapsulation would be useless.



2) Proper encapsulation doesn't slow the code down.

    If this is true, then the encapsulation is an organization and delegation of authority. Encapsulation in this instance fulfills its purpose completely.



If this is inexperience, I will run into a brick wall eventually.


quote:Original post by Kylotan

Factory.CreateObject() is responsible for allocating the memory based on the type given to it. It is the only good way I am aware of for solving the problem both you and I came up against recently: being able to instantiate a given class in a hierarchy. The Factory class has no knowledge (nor should it) of the lifetime of the memory it allocates.


As I corrected myself before, why do you use abstract base class pointers if you need to know the type of the object anyway? The whole purpose of abstract base classes is so that you do not need to know the type of the object to use it. So, logically, my problem didn't exist. I was making a mountain out of a molehill, so to speak.

Then what is the correct problem? It is the simple fact that the user must generate his own code for creating and destroying his own hierarchy. It's a simple matter of adding a few data members to a window-derived class and taking care of their construction/destruction. For me to do this without knowing the user's source code would require a few "tricks" and a lot more overhead than is necessary. There was no solution because there was no problem!

See how easily I was mistaken? I even emailed Mr. Stroustrup asking for a change in the language! LOL (I wonder how many emails he gets on that subject?)


quote:Original post by Kylotan

One contrived example: there are some classes which are set up so you cannot allocate them on the stack. Maybe someone more knowledgeable than me can remember why this might be useful or necessary... there is also sometimes a limit to the size of objects you can create on the stack, I believe? Is it ok to do, for example: int double_buffer[1024][768]?


The Windows stack starts at 4 KB and grows until it hits 1 MB, and then it gives you a nice "stack overflow" error.

The stack was not made for the creation of large objects, 1) because it's inefficient for the function to create a huge object (meaning > 64 KB) on the stack where it is simply discarded after the function returns, and 2) because it's rare; usually larger objects need to stay around for more than one simple function call. I don't recall ever allocating an object that would cause a stack overflow using this methodology.

And auto_ptr would make a clean solution because of its inline functions, depending on the function. "Stretch" the function out and ask yourself if it couldn't be done better another way.
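
For the int double_buffer[1024][768] case above (roughly 3 MB of ints, well past a 1 MB stack), the heap is the only reasonable home. A sketch -- and note that auto_ptr is no help here, because it calls delete rather than delete[]:

// int double_buffer[1024][768]; // ~3 MB as a local: likely stack overflow

int (*double_buffer)[768] = new int[1024][768]; // heap: fine

double_buffer[100][200] = 0; // used exactly like the stack version

delete[] double_buffer; // must be delete[], which auto_ptr would not call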


quote:Original post by Kylotan

I still contend that allocating memory is not part of the algorithm I described in a previous post.


It's part of the function, just as much as the "algorithm" is. The memory management is always going to be part of the code that implements the algorithm, whether you have to create an object to track it for you or not. Ask anyone who works with algorithms; they'll tell you that they are quite separate from the code used to implement them. Perhaps I used the wrong word in a previous post? I apologize, if I did.


quote:Original post by Kylotan

Heh. A good compiler will probably optimize away the bool stuff in the first example anyway.

Saving typing reduces redundancy. Reducing redundancy minimizes the chance of making mistakes. Mistakes cost time and/or money.

Any time you find yourself repeating something, it should be a candidate for abstraction and/or encapsulation. I showed examples where 'delete' was used at several different points within a function, therefore abstracting it away makes sense. Your comments on 'just use the stack' were valid, but were addressed in the other message.


Yes, it will optimize with the first example and not with auto_ptr. But anyway, my point there was that you should compare the actual total code used and not just some imagined helper class that can do anything. auto_ptr is correct in its place, but any programmer must discover that place before he can use it.

BTW, on that redundancy topic, you're paraphrasing what I said many times much earlier in this post, which was: "I learned a while ago that if you are repeating code anywhere, there is a flaw in your design." This does not apply to language features but to groups of language statements. What I just meant was that if() is not a target for encapsulation just because I use if( p == NULL ) a lot.


quote:Original post by Kylotan

I've never found assertions to cause my PC to crash. The only problems I can see are in DirectX exclusive mode: and I use logging and exceptions here, as there is no alternative.

I am not concerned about performance at the pre-release stage; I am concerned with correctness. Now, since things -will- go wrong, I need to be able to track them. Exceptions hose the stack. I need to be able to see the stack to see what went wrong. Therefore using an exception is counterproductive. 99% of the time, my PC and debugger can deal with resources not being deallocated immediately, and it would slow down development to have these objects destroyed before I can see what was in them. Whatever other benefits exceptions have, they don't help for debugging.


1) I have definitely had them cause my PC to crash, especially with MFC and DX. MFC causes me to hit the X button on the compiler after I have looked at the assert, which will deallocate my program's memory and some of the Windows resources that I used, but not all of the Windows resources, and it will not Release() any DX stuff. After a few times of this, the menus in Explorer get garbage in them, the taskbar looks odd, and DX starts to return DDERR_OUTOFMEMORY and DDERR_GENERIC quite often. That's just a sampler, too. IMO, assertions are a professional way to produce memory leaks in a debug app.

2) I personally hate assertions because they plop me right into the (IMO poorly commented and poorly written) MFC source code, with something like "nHashSize != nGrowBy+1" or something like that. How do you explain how that supports encapsulation and abstraction? In many cases I have to go through all of the disorganized source code and locate the header for the class declaration, and weed through quite a few comments before I can discover the purpose of the variables. (No one has answered this question since I posted it way back. Eiffel is supposed to be the "incarnation" of OOP, isn't it?) The proper documentation for the source code depends on the person looking at the code. The user of the class should NEVER need to look at the comments in the code; that is a job for large manuals and tutorials. Otherwise, the class's interface is pointless. The writers of the class SHOULD need to look at the comments, AND some docs on the way the class works. Assertions break encapsulation.

3) You can easily set up exceptions to bring up a dialog box in Windows, or a message in any other app. I did -- it easily reported the values of 6 variables, the function location, the type, a LONG message description, the class, the label, and the operation being performed. You could pass as many of those as you wanted; the constructor had defaults for all of them. When an exception was triggered, I could choose to display it before I threw it or display it in another catch() statement. Typical debugging involves about 3 to 4 "active" catch() statements at different places in the code. Therefore, it is not hard to track where they are if you choose to throw() them to somewhere else before reporting. Oh, BTW, you can break into your code with any decent compiler when the message pops up. The real satisfaction from this approach is that once you hit the continue function on the debugger and hit OK, the exceptions make sure everything is cleaned up! As I said before, assertions are merely a lousy form of exceptions. Eiffel may offer some glitter to assertions, but they're still short of usefulness.

4) What happens with assertions in things like fullscreen DX? You said yourself they are unusable in that environment. Why should they be? They are incapable of providing a proper error trapping mechanism, because the mechanism that they are dependent upon is faulty. With exceptions, you could simply catch() it way back in main() and presto! DX would have been cleaned up. You could also have it log the variables to a file, play a sound, whatever you want. You can make an exception do almost anything you want it to do, while assertions just sit there. It's kind of like the difference between a real puppy and a porcelain figure of a puppy; the porcelain figure may behave better than the real puppy when you tell it to sit, but what happens when you want something different?

Since exceptions can do everything assertions can do, and better, and more, why use assertions?


quote:Original post by Kylotan

Yes, and the general idea is that once you have mastered a concept, you encapsulate it and move on. Of course you need to know what it does. But it doesn't mean you need it everywhere explicitly.


I think I'll encapsulate an if() statement in a function object... C'mon! I said let's make sure the encapsulation is correct first! Otherwise the encapsulation will only hinder the use of the concept. Encapsulation, if used incorrectly, will create more problems than it solves.

(That's why it appears discouraging to people learning C++ - they have trouble creating proper encapsulations so they never see the benefit. And that's why "professional" game programmers make the statement "encapsulation is useless for high-performance software like games." That is the equivalent of saying that brakes are useless for high-performance vehicles like stock cars or dragsters. The faster you want your code to be and still operate on a bug-free level, the more organization you need. No bones about it.)


quote:Original post by Kylotan

I was under the impression that you disagreed with garbage collection on principle, rather than just garbage collection in C++. Apart from simplistic implementations such as auto_ptr or fairly basic reference counting, I am not interested in garbage collection in C++ either. But I think it works wonderfully for Java and doesn't 'introduce bugs' any more than null pointers and memory leaks do in C++.

However, I am interested in abstracting away memory management wherever I can, in the interests of robustness and removing implementation details from algorithms.


I did disagree with garbage collection on principle. I still do. Java, even apart from all of its other shortcomings, is incredibly slow. I also hear (from major computing magazines and Java references) that it's a real pain getting any decent program written in Java to run on every platform. I disagree with Java's methodology.

<pedantic rant="Java">Why would anyone want to use a VM for regular software? As a scripting language for web pages, it may be valid, but for commercial programs? Anybody who thinks a full VM can compete with native software is: 1) ignorant, or 2) out of his mind. A good language with a good standard (C++) provides a virtual PLATFORM and not a VM. A virtual platform uses code that is compiled and optimized for the target platform, and NOT some computer that doesn't exist so it has to be re-emulated on a real computer. Software that runs on many machines should use the same source and different generated machine code. The only thing that a VM will ever be useful for is for running old software that requires old hardware that doesn't exist anymore.</pedantic url="http://come.to/sabremultimedia">

I don't consider auto_ptr to be garbage collection, because it's merely a replacement for what I would have used anyway. Just like "if( p ) { delete p; p = NULL; }". Back to the main point. Why abstract away memory management? It's an essential part of the C++ language, and it should be second nature to any good C++ programmer to know everything that could happen between new() and delete(). It's part of being a good programmer. Just like knowing what if() does and what for() does. Same thing with throw(), catch(), etc.


quote:Original post by Kylotan

In C++, this is true. In other languages, it isn't, and shouldn't need to be.


In other languages, you still pay for garbage collection in speed and flexibility. You don't just get "magical" "free" garbage collection because the language does it automatically for you. I still think that in many languages, memory allocation/deallocation is a very important issue and cannot be abstracted away. Somewhat like what type of numbers you use for 3D calculations. You just can't substitute ints for floats or doubles because "it's not part of the algorithm"...

The fact is that you have two things when you go to code an algorithm: the "ideal" pseudo-code-like description of the algorithm, and the language-specific code. They are separate. Memory management is an essential part of the language-specific code for the algorithm, and "abstracting it away" simply because you don't want to deal with it will cause you more problems than good. You must make sure the encapsulation is correct. Taking one thing - encapsulation in this case - and making it into the all-powerful determination of right and wrong is the essential human error (with the word "essential" meaning "the essence of" and not "mandatory").


Is it just me, or have people purposely not posted source code to demonstrate their points?


- null_pointer
Sabre Multimedia
quote:Original post by null_pointer

Actually, the phrase I used was "I don't understand why it would be necessary to create an object on the heap in a function and then return it..." Constructors don't have return values...


Check your post: you omitted the word 'it' from the end, hence the confusion.

However, the factory method I posted is a good example of a non-constructor function that does this.

quote:
The point of that example was to show that when evaluating whether an abstraction is correct, it is necessary to examine the total code and not perceived code. (ok, that's about the 3rd time I've said this.) I'm all for encapsulation, but I want to make sure it will benefit me.


I'm sure that an experienced asm programmer could go through some of your compiled C++ code, even post-optimization, and find some redundant operations in there. The performance hit might be minimal, but it will be present. This is the negative effect on performance you get in 'encapsulating' assembly code in a high level language.

quote:
There are 2 cases here:

1) Proper encapsulation does slow the code down.

    If this is true, then the encapsulation is merely a filler for the programmer's own inability. Otherwise, encapsulation would be useless.


Encapsulation almost always involves an extra dereferencing of a pointer, at a minimum. This slows code down whether you like it or not. Encapsulation almost always means indirection and indirection means an inherent loss of efficiency as a tradeoff for having a better interface.

quote:If this is inexperience, I will run into a brick wall eventually.


Perhaps you are expecting every programmer to be perfect. No programmer will ever be perfect. I am sure Stroustrup has had the occasional bug in his code too. Programmers are not perfect, and therefore presenting 'safer' and 'quicker to program' interfaces that perhaps lose some performance is nearly always a good tradeoff. Sure, you could code for every video card separately, and have a slightly faster application, but it's easier and safer to let DirectX do the conditional checking behind the scenes, even though the performance is reduced somewhat. And if you don't believe performance in Direct3D, for example, is worse than programming the card directly, no one would ever have made custom versions of games to bundle with the video cards. To present a unified interface, some things have to be done less efficiently.

If you want to be pedantic, simply using a function call rather than having the code inline incurs a performance hit due to having to push parameters etc. So your 'stretched' code will always be more efficient, no matter how you do it. You -have- to accept loss of performance to some degree as part of making a cleaner design.

quote:
Original post by Kylotan

Factory.CreateObject() is responsible for allocating the memory based on the type given to it. It is the only good way I am aware of for solving the problem both you and I came up against recently: being able to instantiate a given class in a hierarchy. The Factory class has no knowledge (nor should it) of the lifetime of the memory it allocates.


As I corrected myself before, why do you use abstract base class pointers if you need to know the type of the object anyway? The whole purpose of abstract base classes is so that you do not need to know the type of the object to use it. So, logically, my problem didn't exist. I was making a mountain out of a molehill, so to speak.

Maybe your problem didn't exist. But it is just as bad to fill an abstract base class with hundreds of functions that apply to the subclasses, knowing that most of those functions don't do anything meaningful for most of the children. There is a tradeoff to be made.

quote:
Then what is the correct problem? It is the simple fact that the user must generate his own code for creating and destroying his own hierarchy.

This is what the Factory method I described above encapsulates, so you don't have to keep rewriting the code to instantiate subclasses. It is a valid and well-documented use for allocating memory on the heap and yet not being responsible for its deletion.

quote:
And auto_ptr would make a clean solution because of its inline functions, depending on the function. "Stretch" the function out and ask yourself if it couldn't be done better another way.


I still think that a couple of extra lines of code executed is cleaner and safer than numerous extra lines added to the code. You could always write yourself a simpler version of auto_ptr that doesn't worry about the ownership semantics. But then you could also write your own string class, your own this, your own that… this is expensive on programmer time and buys you very little CPU time.

quote:
It's part of the function, just as much as the "algorithm" is. The memory management is always going to be part of the code that implements the algorithm, whether you have to create an object to track it for you or not. Ask anyone who works with algorithms; they'll tell you that they are quite separate from the code used to implement them.

This is by necessity, not by design. So whenever that 'necessity' can be abstracted away, you do so. Instead of:
int buffer = a;
a = b;
b = buffer;
Most would prefer
Swap(a, b);
where Swap is a template function that encapsulates the above. Same functionality, but the details are hidden, as they are not relevant to the function. They are necessary, but not relevant. Just as the details of the memory allocation are not relevant to the algorithm.
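
The whole of Swap, sketched (the standard library already ships an equivalent as std::swap):

template <typename T>
void Swap(T& a, T& b)
{
T buffer = a; // the same three statements, now behind one name
a = b;
b = buffer;
}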

quote:Yes, it will optimize with the first example and not with auto_ptr. But anyway, my point there was that you should compare the actual total code used and not just some imagined helper class that can do anything. auto_ptr is correct in its place, but any programmer must discover that place before he can use it.

Why not with auto_ptr? Optimising compilers can usually spot invariants, and if there is no way that the ownership variable in auto_ptr will be changed, it can take it right out. This is probably very easy with auto_ptr since it is a template class and the code ends up inlined.

quote:
BTW, on that redundancy topic, you're paraphrasing what I said many times much earlier in this post, which was: "I learned a while ago that if you are repeating code anywhere, there is a flaw in your design." This does not apply to language features but to groups of language statements. What I just meant was that if() is not a target for encapsulation just because I use if( p == NULL ) a lot.

This is a group of language statements. Using auto_ptr encapsulates any number of delete statements. If you will only ever use 'delete' on that object in one place in the code, then it buys you nothing. Otherwise, it encapsulates the concept of needing to be able to delete it from numerous places. It also provides for exception safety in a function that has no try/catch block. I really do suggest you check the Meyers book I mentioned, as he demonstrates quite amply how auto_ptr and effective exception handling go hand in hand.
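
A sketch of that point (foo, do_risky_work, and transmogrify are all hypothetical names): the function below needs no try/catch to avoid a leak, because stack unwinding runs the auto_ptr destructor:

#include <memory>

class foo { int dummy[64]; };
void do_risky_work(foo&); // assumed to exist; may throw

void transmogrify()
{
std::auto_ptr<foo> p( new foo );

do_risky_work( *p ); // may throw -- yet no try/catch is needed here

} // normal exit or unwinding: p's destructor deletes the foo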

quote:
1) I have definitely had them cause my PC to crash, especially with MFC and DX. MFC causes me to hit the X button on the compiler after I have looked at the assert, which will deallocate my program's memory and some of the Windows resources that I used, but not all of the Windows resources, and it will not Release() any DX stuff. After a few times of this, the menus in Explorer get garbage in them, the taskbar looks odd, and DX starts to return DDERR_OUTOFMEMORY and DDERR_GENERIC quite often.


I am not responsible for poor coding on Microsoft's part. The OS should free all its memory belonging to a process once it terminates, whether the program was coded badly or not. And the GUI should not share memory space with the kernel code, but oh well!

MFC is released code. For them to have assertions inside it is not something I agree with.

quote:2) I personally hate assertions because they plop me right into the (IMO poorly commented and poorly written) MFC source code, with something like "nHashSize != nGrowBy+1" or something like that. How do you explain how that supports encapsulation and abstraction? In many cases I have to go through all of the disorganized source code and locate the header for the class declaration, and weed through quite a few comments before I can discover the purpose of the variables. (No one has answered this question since I posted it way back. Eiffel is supposed to be the "incarnation" of OOP, isn't it?)


I don't think anyone was advocating releasing libraries to the public with assertions still enabled (presumably because the library is still buggy?!). Similarly, I expect Eiffel doesn't have assertions in code that you are never going to see; however, it encourages them as a way to enforce correctness in client code.
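
For what it's worth, the usual shape of that idea (a sketch; get_sample is a made-up function): an assert documents a precondition for the client programmer, fires only in debug builds, and compiles away to nothing once NDEBUG is defined for release:

#include <cassert>

// Precondition: buffer is non-null and 0 <= index < size.
int get_sample(const int* buffer, int size, int index)
{
assert( buffer != 0 ); // fires only in debug builds
assert( index >= 0 && index < size );
return buffer[index];
}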

quote:3) You can easily set up exceptions to bring up a dialog box in Windows, or a message in any other app. I did – it easily reported the values of 6 variables, the function location, the type, a LONG message description, the class, the label, and the operation being performed.

What about variables declared inside your try block? Are they not deallocated by the time your catch block executes? Are you advocating no allocation of memory inside try blocks?

And what about the function that called that function? What were its variables? Or the function that called that? How much work would you have to go to in order to see that information using exceptions? And in an easily manipulable form?

The answer is, it's not worth bothering simulating that yourself. That is what the debugger is for. Whether you are using the nice one that comes with Visual C++ or GDB for Unix, they are very very handy and already provided for you.

quote:Oh, BTW, you can break into your code with any decent compiler when the message pops up.


After numerous stack-based (and pseudo_stack based) variables have already been deallocated and are now unviewable, sure.

quote:4) What happens with assertions in things like fullscreen DX? You said yourself they are unusable in that environment. Why should they be? They are incapable of providing a proper error trapping mechanism, because the mechanism that they are dependent upon is faulty. With exceptions, you could simply catch() it way back in main() and presto! DX would have been cleaned up. You could also have it log the variables to a file, play a sound, whatever you want. You can make an exception do almost anything you want it to do, while assertions just sit there. It's kind of like the difference between a real puppy and a porcelain figure of a puppy; the porcelain figure may behave better than the real puppy when you tell it to sit, but what happens when you want something different?


When you want something different, you use something different. I use assertions when I do not want something deallocated, and I use exceptions when I do, or when an assertion will. If I didn't find exceptions useful, I wouldn't have spent way too large a proportion of my life on this thread. I just believe assertions have a small but useful place in software development too. Some experts agree, some do not.

quote:
Since exceptions can do everything assertions can do, and better, and more, why use assertions?


When asm can do everything that C++ can do and more, why use C++?

Ease of use. Conciseness. Better integration with development tools.

quote:
I did disagree with garbage collection on principle. I still do. Java, even apart from all of its other shortcomings, is incredibly slow. I also hear (from major computing magazines and Java references) that it's a real pain getting any decent program written in Java to run on every platform. I disagree with Java's methodology.

I have already expressed my belief that it is worth sacrificing processor time for programmer time, and that is a large part of the responsibility of garbage collection. Except for cyclic references (most of which a good implementation will address, I expect), it is impossible for a program in a garbage-collected language that disallows globals to have a memory leak, since as soon as something is no longer used, it's gone. This is no more a negative reflection on a programmer's skills than is the use of a stack based variable. Would you have them decrement the stack pointer manually to explicitly deallocate such variables? Why not extend that to heap allocated variables?

quote:Why would anyone want to use a VM for regular software? As a scripting language for web pages, it may be valid, but for commercial programs? Anybody who thinks a full VM can compete with native software is: 1) ignorant, or 2) out of his mind.


With all due respect, I feel you are the ignorant one in this case, and I don't mean that in an insulting way.

Both Unreal and Unreal Tournament run on Virtual Machines. The last I heard, they were commercial. They also successfully compete with 'native' software. I also believe Quake 3 uses a virtual machine, but I cannot verify this. Maybe you should research this a little more…

I personally think that platform-independent code in a custom language is going to become more and more popular as portability and user-mods gain importance. Less and less of the game is going to be done in native code, and more in languages 'closer to the problem domain', whether they are simple scripting languages or compiled bytecode.

quote:Why abstract away memory management? It's an essential part of the C++ language, and it should be second nature to any good C++ programmer to know everything that could happen between new() and delete().


Why abstract anything away? For a cleaner interface, to reduce repetition, to minimise the chance of making errors, etc etc. If I often had to pass a given variable to 10 different 'if' statements, it would also make sense to encapsulate those 10 rather than repeating them inline every time. In C++, you must know memory management, but that shouldn't mean you explicitly have to micromanage every byte yourself.

As for not posting any source code, well, I don't feel any of my points need it. I could give you a long-winded demonstration of how an auto_ptr and a factory method and some exceptions could demonstrate, in one example, all the points I have made, but that would be long-winded, and if I had that much time, I'd be working on real code.
quote:Original post by Kylotan

quote:
--------------------------------------------------------------------------------
Original post by null_pointer

Actually, the phrase I used was "I don't understand why it would be necessary to create an object on the heap in a function and then return it..." Constructors don't have return values...
--------------------------------------------------------------------------------

Check your post: you omitted the word 'it' from the end, hence the confusion.

However, the factory method I posted is a good example of a non-constructor function that does this.


No, you please check the post.

You'll find that it hasn't been edited since before you posted your first reply talking about the constructor. The word "it" is most certainly on the end and has been. (I can't believe we're arguing about something like this.)


quote:Original post by Kylotan

I'm sure that an experienced asm programmer could go through some of your compiled C++ code, even post-optimization, and find some redundant operations in there. The performance hit might be minimal, but it will be present. This is the negative effect on performance you get in 'encapsulating' assembly code in a high level language.


I don't see why, but I don't know much Intel assembly. I should think that most of that negative effect on performance is caused by the compiler; compilers are getting better and will continue to improve.


quote:Original post by Kylotan

Perhaps you are expecting every programmer to be perfect. No programmer will ever be perfect.


No, but new() and delete() are simple and you claim that knowledge of them is necessary before using auto_ptr. Are you also implying that it is faster to learn auto_ptr instead of new() and delete()?


quote:Original post by Kylotan

If you want to be pedantic, simply using a function call rather than having the code inline incurs a performance hit due to having to push parameters etc. So your 'stretched' code will always be more efficient, no matter how you do it. You -have- to accept loss of performance to some degree as part of making a cleaner design.


I don't believe so, but then again I don't quite understand what you are saying. My 'stretched' code is merely for evaluation purposes; I never meant it to compile. I was not comparing the 'stretched' code of auto_ptr to the actual code of auto_ptr; I was comparing the 'stretched' code of normal new() and delete() to the 'stretched' code of auto_ptr.


quote:Original post by Kylotan

Maybe your problem didn't exist. But it is just as bad to fill an abstract base class with hundreds of functions that apply to the subclasses, knowing that most of those functions don't do anything meaningful for most of the children. There is a tradeoff to be made.


What are you talking about? Hundreds of methods in the base class? I don't think you understand; you don't implement a whole hierarchy of window and window-derived classes unless the derived classes contain specific functionality not found in and not applicable to the root base class. For example:


class window
{
public:
window();
virtual ~window();

virtual void move(int, int);
virtual void resize(int, int);
};

class toolbar
: public window
{
public:
virtual void dock(enum);
virtual void undock();
};

class button
: public window
{
public:
virtual void press(enum, int, int);
};



etc. The whole point of making a set of derived classes is to build a hierarchy of GUI controls, that have some common base (window methods) but also their own distinct functionality.


quote:Original post by Kylotan

quote:
--------------------------------------------------------------------------------

Then what is the correct problem? It is the simple fact that the user must generate his own code for creating and destroying his own hierarchy.
--------------------------------------------------------------------------------

This is what the Factory method I described above encapsulates, so you don't have to keep rewriting the code to instantiate subclasses. It is a valid and well-documented use for allocating memory on the heap and yet not being responsible for its deletion.


Ah...you misunderstood me. I mean generating his own code for creating and destroying the classes he wishes to use, as in:


class my_button // user-defined class
: public button // abstract base class
{}; // blank for example

class my_frame // user-defined class
: public frame // abstract base class
{
public:
my_frame();
virtual ~my_frame();

protected:
// button* minimize_button; -- in base class
};

my_frame::my_frame() // user-derived from abstract base frame
: frame()
{
minimize_button = new my_button; // member declared in base class frame
}

my_frame::~my_frame()
{
if( minimize_button != NULL )
{
delete minimize_button;
minimize_button = NULL;
}
}



The factory method is just a work-around for that. The Factory method just generates more code that is able to create an instance of a class from some kind of an ID, and leaves the base class code to handle it. In doing so it seems to shift the responsibility needlessly to the programmer of the abstract classes, who has no need of knowing the type of the derived classes. That is the whole purpose of using abstract classes -- you do not need to know the type, only whether a given class is derived from the abstract base class. Then why should you need to know the type when loading the classes? That was my point -- you don't. The user doesn't need something like CoCreateInstance(typeid(my_button)) or some other method that simply new()'s a type.


quote:Original post by Kylotan

I am not responsible for poor coding on Microsoft's part. The OS should free all its memory belonging to a process once it terminates, whether the program was coded badly or not. And the GUI should not share memory space with the kernel code, but oh well!

MFC is released code. For them to have assertions inside it is not something I agree with.


1) MFC comes in two versions, release and debug, and variations of those for different applications...kind of like the run-time libraries and their different libs. Only the debug version has assertions.

2) That is -exactly- why assertions are bad -- they are of no use for debugging across software layers, which is becoming more common today, and will become even more common in the future. Software size (both source and compiled code) is increasing, and the benefits of modularity seem only to increase with the size of the program.


quote:Original post by Kylotan

I don't think anyone was advocating releasing libraries to the public with assertions still enabled (presumably because the library is still buggy?!). Similarly, I expect Eiffel doesn't have assertions in code that you are never going to see; however, it encourages them as a way to enforce correctness in client code.


1) It depends on what "the public" in your statement meant. As I said earlier, when you release a library, you typically give out 2 versions: release and debug. The debug version has extra features that are useful to the programmers that build software using the library, but those extra features are useless to the end user. If assertions are used in the debug build of the library, you -will- encounter them if you write buggy software (which you say everyone does at some time in their life).

2) The question was not whether or not Eiffel has assertions in code that you will never see; the question was about how in the world assertions comply with OOP and encapsulation and what-have-you when dealing with different software layers. No one has yet answered me.

3) Enforce correctness in client code? I thought you said server code should not have assertions in it? How could the server code enforce correctness in client code using assertions if the server code does not contain assertions? Further, how do assertions accomplish this task? It would require giving out the source code for the server code so that you could gain the only advantage of assertions: looking at stack variables using the debugger. And how will the client code clean up? What if the client code uses other servers? How will they be cleaned up? What if they use other servers yet? There are just too many problems with this whole idea.


quote:Original post by Kylotan

quote:
--------------------------------------------------------------------------------
3) You can easily set up exceptions to bring up a dialog box in Windows, or a message in any other app. I did -- it easily reported the values of 6 variables, the function location, the type, a LONG message description, the class, the label, and the operation being performed.
--------------------------------------------------------------------------------

What about variables declared inside your try block? Are they not deallocated by the time your catch block executes? Are you advocating no allocation of memory inside try blocks?

And what about the function that called that function? What were its variables? Or the function that called that? How much work would you have to go to in order to see that information using exceptions? And in an easily manipulable form?

The answer is, it's not worth bothering simulating that yourself. That is what the debugger is for. Whether you are using the nice one that comes with Visual C++ or GDB for Unix, they are very very handy and already provided for you.


1) When you save the variables into the exception object, you copy by value. So it doesn't matter where the variables are, as they are saved from the calling code, which is -in- the try block (it has to be in the try block to throw()). Look at this example:


void some_class::some_function(bool condition)
{
try {
int x = 0, y = 0, z = 0;

if( condition )
{
throw( exception("some_class", "some_function", "",
"I'm testing the use of the function on variables allocated inside a try block.",
"Variable Test", x, y, z) );
}
}

catch( exception& e )
{
e.display(); // displays a message box, can stop here

throw; // re-throw to another catch()
}
}



2) How do you get to the information in calling functions from inside called functions when an assertion is called? You hit the break button on the compiler. Do the same when the message box is up in a catch() statement. You control where the message box pops up. You could just as easily call display() before throwing the exception, so that you can break inside the try block.

3) It uses the debugger, but can also provide quick output of up to six variables (logging in fullscreen DX, anyone?). Whatever you choose.


quote:Original post by Kylotan

After numerous stack-based (and pseudo_stack based) variables have already been deallocated and are now unviewable, sure.


I told you before. You can just as easily display the exception before you throw it, and break into your code from there.


quote:Original post by Kylotan

When asm can do everything that C++ can do and more, why use C++?

Ease of use. Conciseness. Better integration with development tools.


1) Um...why can't exceptions be integrated with development tools?

2) Why aren't exceptions easy to use and concise?


quote:Original post by Kylotan

quote:
--------------------------------------------------------------------------------

Why would anyone want to use a VM for regular software? As a scripting language for web pages, it may be valid, but for commercial programs? Anybody who thinks a full VM can compete with native software is: 1) ignorant, or 2) out of his mind.
--------------------------------------------------------------------------------

With all due respect, I feel you are the ignorant one in this case, and I don't mean that in an insulting way.

Both Unreal and Unreal Tournament run on Virtual Machines. The last I heard, they were commercial. They also successfully compete with 'native' software. I also believe Quake 3 uses a virtual machine, but I cannot verify this. Maybe you should research this a little more...

I personally think that platform-independent code in a custom language is going to become more and more popular as portability and user-mods gain importance. Less and less of the game is going to be done in native code, and more in languages 'closer to the problem domain', whether they are simple scripting languages or compiled bytecode.


Ah...no, I don't think so. But then if I am ignorant, how would I know I am ignorant?

1) I put the rant="" attribute (or whatever the thing is called) in my post, so it doesn't have to be intelligent or correct. (That's a hint to people who rant.)

2) Do we both understand correctly what a Virtual Machine (VM) is? I think not. So I will explain what I know (do be prepared, it can be boring) in the hopes that we will clarify any mistaken concepts we may have.


    HOW A CPU WORKS

    CPUs have several different types of registers, which are somewhat like hardware "variables", so to speak. They include: the code register, the data registers, and the flag registers. The code register stores the instruction to be executed. The data registers store the variables, if any, that are used with the instruction stored in the code register. The flag registers are like a "state" of the CPU, in that they are very general purpose and can do things like indicating options for the current instruction, the result of the current instruction upon the data, etc. Think of the flag registers like the little notation you use when doing multi-digit multiplication: you put a little one over the top of the next number to indicate you are carrying a one. The flag registers hold things like carry, overflow, etc.

    General program flow typically involves loading in the next instruction, loading in the data operated upon, setting the flags, and performing the operation.

    The instructions are much like the statements and keywords in C++. There are typically instructions for breaking, jumping, conditional branching, integer math, binary math, floating-point math, etc. These instructions are typically in the form of opcodes, or operation codes. The opcode is a special value that determines which instruction is to be carried out; the values of the opcodes are typically fixed in the CPU.

    The opcodes are usually given labels, syntax, and explanation, and this is what is called an assembly language. Since assembly language is basically a definition of the use of the opcodes, assembly is very close to the CPU and is non-portable.

    CPUs operate on data stored in memory via their data bus. That means a CPU needs external memory to store code and data; this memory is called RAM. CPUs can also execute code from ROM, but they still need RAM for writable storage.

    A typical computer consists of a CPU, some RAM, the devices needed by the user, and the motherboard, which coordinates everything, handles power, and initializes the CPU, RAM, and devices.


    HOW A VIRTUAL MACHINE WORKS

    Virtual Machines work in much the same way. They model an ideal (or common) CPU. Sometimes the model is very simple. Also modeled are the instructions carried out on a typical CPU. These are called bytecodes.

    Virtual Machines also provide memory management and an API to use. The API requires the creation of a new language, which is often called the scripting language (in combination with the API). The scripting language can be very simple, but essentially it is usually an assembly/C hybrid that provides basic expressions, keywords, data types, functions, etc.

    The only real problem with VMs is that the CPU/memory model doesn't exist. At best you will be operating at the efficiency of the target CPU. At worst you could be...well...let's just say very, very slow. You see, the VM interpreter (or CPU) does its best to match its virtual bytecodes to the actual CPU's opcodes, but all CPUs are built differently. You will get a definite performance hit for being this generic at such a low level. Not only do the virtual bytecodes not match the actual opcodes, but software must drive and evaluate the CPU and the code, manage the cache, the memory, etc. If the target machine doesn't turn out to be exactly the same as the VM, you get a big performance hit. Not only is the VM CPU model inefficient by definition, but the VM itself requires extra resources to emulate another computer.
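To make that concrete, here is roughly the sort of inner loop a VM interpreter runs (a toy sketch; the opcodes and the stack machine are invented for illustration):

#include <cstddef>
#include <cstdio>
#include <vector>

// Toy bytecodes for an invented stack machine -- real VMs define far more.
enum Opcode { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

void run(const std::vector<int>& code)
{
    std::vector<int> stack;   // the VM's operand "registers" live in plain memory
    std::size_t pc = 0;       // the virtual program counter

    for (;;)
    {
        switch (code[pc++])   // fetch and decode: overhead a native binary never pays
        {
        case OP_PUSH:
            stack.push_back(code[pc++]);   // the operand follows the opcode in the stream
            break;
        case OP_ADD:
        {
            int b = stack.back(); stack.pop_back();
            int a = stack.back(); stack.pop_back();
            stack.push_back(a + b);        // one bytecode costs several real instructions
            break;
        }
        case OP_PRINT:
            std::printf("%d\n", stack.back());
            break;
        case OP_HALT:
            return;
        }
    }
}

Feed it { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT } and it prints 5 -- after executing many times that number of native instructions.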



By sheer obviousness, we can observe that software written in all native code will run faster on the same hardware than software written using a VM.

Since a VM is not the most efficient way of doing things, there are usually big justifications for building it. Here are two reasons: 1) a custom game "engine", and 2) portability. (I was ranting about Java (portability), and the validity of VMs in general.)

Why do I think VMs are needless? Well, for portability, C++ classes, virtual functions, and DLLs provide all the flexibility you could ever need. All you do is make an abstract class, and some derived classes to handle the implementation. Here is what I mean:


class engine
{
public:
    virtual ~engine() {}    // virtual destructor, so deleting through an engine* is safe

    virtual void initialize() = 0;
    virtual void loop() = 0;
    virtual void uninitialize() = 0;
};

class UNIX
    : public engine
{
public:
    virtual void initialize() {}
    virtual void loop() {}
    virtual void uninitialize() {}
};

class windows
    : public engine
{
public:
    virtual void initialize() {}
    virtual void loop() {}
    virtual void uninitialize() {}
};



Put the native code for UNIX in the UNIX member functions, and put the native code for windows in the windows member functions. Put the derived classes in DLLs, and make them return an engine*, and BAM! instant portability. Just swap the DLLs when you want to change engines. Or let the user choose. When you want to compile for a new target platform, you just re-compile the game code. And it's all compiled into native code.
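To show what I mean by "make them return an engine*", here is a rough sketch of the Windows side (create_engine and load_engine are just names I picked; a UNIX build would use dlopen/dlsym instead of LoadLibrary/GetProcAddress):

// Inside each engine DLL, export one factory function with C linkage.
// (engine, windows, and UNIX are the classes from above.)
extern "C" __declspec(dllexport) engine* create_engine()
{
    return new windows;   // the UNIX version of the DLL would return new UNIX
}

// In the game executable, pick an implementation at run time:
#include <windows.h>

typedef engine* (*create_engine_fn)();

engine* load_engine(const char* dll_name)
{
    HMODULE dll = LoadLibrary(dll_name);
    if (dll == 0)
        return 0;

    create_engine_fn create =
        (create_engine_fn)GetProcAddress(dll, "create_engine");

    return create ? create() : 0;   // the game only ever sees an engine*
}

The game code never mentions a concrete class; swapping engines really is just swapping which DLL you load.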

Why do you need to make a VM for portability? You don't. The virtual CPU, bytecodes, and language are merely replacing what a C++ compiler does; a C++ compiler already turns a portable language into machine-specific code. The VM API is merely replacing what classes in C++ already provide: a portable interface. I say it's time to stop re-inventing the wheel!


OK, now for the custom game engine reason. *rolls up sleeves* Why do you need a VM? It only makes the programmers who use the engine learn yet another language, decreases performance over native code, and provides countless hours of hard work.

And after all that hard work, if the VM and its code ever reach the level of performance that C++ could easily provide, it would still be wasted effort. The useless megabytes of source; the countless extra hours spent by both VM writer and game programmer alike; the pointless extra tools created; and all of it done to create a "structure" or "methodology" that could have been created with a little typing and a few mouse-clicks in a C++ compiler/IDE. Oh joy of joys. And to top off the months of hard work, the programmers are rewarded with praise and big salaries. Wow. I wish I could be so dense; maybe then I could afford to upgrade my modem. But I'm afraid that I'm too smart...


3) Now, could you please explain to me why VMs have any place in software today?


quote:Original post by Kylotan

I personally think that platform independent code in a custom language is going to become more and more popular as portability and user-mods gain importance. Less and less of the game is going to be done in native code, and more in languages 'closer to the problem domain', whether they are simple scripting languages or compiled bytecode.


It's nice to dream...

Personally, I think the trend is more toward generic programming languages that the compiler/profiler optimizes and turns into machine-specific code. (Last I heard, that's what compilers do.) And I have no idea how custom languages facilitate portability more than generic languages like C and C++. Languages provide an interface to code; APIs provide an interface to specific functionality using the language. I don't think confusing the two will help programmers do anything but make a mess of their code (BTW no one needs help with messing up their code, especially me!).

If you want to get technical about the whole thing, the point is modularity. Separate the programmer's needs from the computer's optimization from the user's needs, and you've got: 1) a language, 2) a compiler, and 3) an API. Throwing them all together into one disorganized mess called a VM is not to progress but to regress...

To put it bluntly, you shouldn't have to create a new computer for every piece of software that you make. That's a bit backwards...



- null_pointer
Sabre Multimedia
quote:
I don't see why, but I don't know much Intel assembly. I should think that most of that negative effect on performance is caused by the compiler; compilers are getting better and will continue to improve.


And wrapper classes continue to improve too. You have to trade off some performance for other features. Most programmers consider trading off 5% processor time for a significant amount of programmer time to be a good choice. Strangely enough, so do project leaders and managers...

quote:
Are you also implying that it is faster to learn auto_ptr instead of new() and delete()?


No, but it's quicker and clearer to make certain parts of your program safe with auto_ptr than with new and delete.
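For example (a sketch; assume foo is some class and do_work may throw):

#include <memory>

class foo { /* ... */ };
void do_work(foo*) { /* might throw in real code */ }

void with_auto_ptr()
{
    std::auto_ptr<foo> p(new foo);
    do_work(p.get());   // if this throws, p still deletes the foo
}                       // ...and it is deleted here on the normal path too

void with_new_and_delete()
{
    foo* p = new foo;
    try
    {
        do_work(p);
    }
    catch (...)
    {
        delete p;       // the cleanup has to be written twice
        throw;
    }
    delete p;
}

Both are safe; only one of them stays safe when you add a second early return or a third statement that can throw.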

quote:
I don't believe so, but then again I don't quite understand what you are saying. My 'stretched' code is merely for evaluation purposes; I never meant it to compile. I was not comparing the 'stretched' code of auto_ptr to the actual code of 'auto_ptr'; I was comparing the 'stretched' code of normal new() and delete() to the 'stretched' code of auto_ptr.



The point is that although the auto_ptr solution may be longer in stretched code, it is shorter and simpler in client code. You lose some performance, and gain some clarity. If you do not want to make that tradeoff, that is your decision, but many (most) do. That is why we use C++ and not assembly for everything.

quote:
What are you talking about? hundreds of methods in the base class? I don't think you understand; you don't implement a whole hierarchy of window and window-derived classes unless the derived classes contain specific functionality not found in and not applicable to the root base class.


You were implying that everything would be accessed through virtual functions in the base class rather than by downcasting to the derived class. I think your original post on this was not very clear, so I'm sorry for the misunderstanding.

quote:
That is the whole purpose of using abstract classes -- you do not need to know the type, only whether a given class is derived from the abstract base class. Then why should you need to know the type when loading the classes?


I am probably missing your point, but -something- has to know what type it is. Otherwise you cannot allocate the memory and call the correct constructor.
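At some point you end up with creation functions that name the concrete types, perhaps behind a lookup table. A rough sketch, reusing the engine classes from earlier in the thread:

#include <map>
#include <string>

typedef engine* (*engine_factory)();

// These two functions are the only place the concrete types are named;
// everything downstream works purely through the abstract engine*.
engine* make_unix()    { return new UNIX; }
engine* make_windows() { return new windows; }

std::map<std::string, engine_factory> registry;

void register_engines()
{
    registry["unix"]    = make_unix;
    registry["windows"] = make_windows;
}

engine* create(const std::string& name)
{
    std::map<std::string, engine_factory>::iterator it = registry.find(name);
    return (it == registry.end()) ? 0 : it->second();
}

The caller can say create("windows") without knowing the type, but the knowledge hasn't vanished -- it has just been pushed into the registry.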

quote:
1) MFC comes in two versions, release and debug, and variations of those for different applications...kind of like the run-time libraries and their different libs. Only the debug version has assertions.


There's a distinction between my own project in debug mode (ie. I want it to give more information about what is wrong with it so I can fix it) and a debug library (it should give more information about what it's doing so that I can fix the way my code uses it). If MFC uses asserts to check that I passed it a valid pointer, I consider that an abuse of assertions. Assertions should be there to check invariants, not to catch the library user's mistakes. If it gets an invalid parameter, it should throw an exception or return an appropriate failure code.

In properly written code, you know exactly where your classes will interface with other 'layers', and assertions shouldn't be used there to check for valid input. However, assertions within your classes checking that -your- logic is correct are a useful tool during development. They also do not come with the overhead of exceptions, and -should- be removed when you distribute your application/library/whatever.
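Something like this is the distinction I mean (an invented example):

#include <cassert>
#include <stdexcept>

class ring_buffer
{
public:
    ring_buffer() : head(0), tail(0), count(0) {}

    void push(int value)
    {
        if (count == capacity)                    // a caller error: report it properly
            throw std::overflow_error("ring_buffer full");

        data[tail] = value;
        tail = (tail + 1) % capacity;
        ++count;

        assert(count <= capacity);                // -my- logic: checked in debug builds only
    }

    int pop()
    {
        if (count == 0)
            throw std::underflow_error("ring_buffer empty");

        int value = data[head];
        head = (head + 1) % capacity;
        --count;
        return value;
    }

private:
    enum { capacity = 16 };
    int data[capacity];
    int head, tail, count;
};

The throws guard the interface and stay in the release build; the assert guards my own arithmetic and disappears with NDEBUG.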

quote:
1) It depends on what "the public" in your statement meant. As I said earlier, when you release a library, you typically give out 2 versions: release and debug. The debug version has extra features that are useful to the programmers that build software using the library, but those extra features are useless to the end user. If assertions are used in the debug build of the library, you -will- encounter them if you write buggy software (which you say everyone does at some time in their life).


I would consider that a defect of the library, and not just because it uses assertions. Assertions are there to help development of the library, not to enforce its proper use. Assertions should be gone by the time an end user gets to touch your code.

quote:
2) The question was not whether or not Eiffel has assertions in code that you will never see; the question was about how in the world assertions comply with OOP and encapsulation and what-have-you when dealing with different software layers. No one has yet answered me.

Because you are expecting everyone's use of assertions to match the (apparently) foolish method used in MFC, perhaps? Admittedly, that is partly down to the limitations of C++.

Now, I am no Eiffel expert, but I believe that it uses preconditions and postconditions for function calls, etc. This means it will tell you if you tried to call a function with an invalid parameter. Consider this an extension of type-checking: not only does it check for the correct types, but it checks that the data is valid for the target. This mechanism enforces whatever documentation comes with your library about the parameters to pass it. It does not break encapsulation, and you do not get dumped into the library's source code. It enforces the proper interface between two classes.

In fact, failing to satisfy an assertion in Eiffel generates an exception. It is just a cleaner and shorter way of doing such a thing.
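You can imitate it in C++, crudely, along these lines (REQUIRE and ENSURE are macros I am inventing for the example):

#include <cmath>
#include <stdexcept>

// Rough stand-ins for Eiffel's 'require' and 'ensure' clauses: a failed
// check raises an exception instead of dropping you into library source.
#define REQUIRE(cond) \
    if (!(cond)) throw std::invalid_argument("precondition failed: " #cond)
#define ENSURE(cond) \
    if (!(cond)) throw std::logic_error("postcondition failed: " #cond)

double square_root(double x)
{
    REQUIRE(x >= 0.0);                                   // enforce the documented interface

    double r = std::sqrt(x);

    ENSURE(std::fabs(r * r - x) <= 1e-6 * (x + 1.0));    // sanity-check the result
    return r;
}

Eiffel does this with language support and far less noise, which is exactly the point.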

quote:
3) Enforce correctness in client code? I thought you said server code should not have assertions in it?


Eiffel's use of assertions is better than that of C++, and that was the context in which I used the above sentence.

quote:
1) When you save the variables into the exception object, you copy by value. So it doesn't matter where the variables are as they are saved from the calling code, which is -in- the try block (it has to be in the try block to throw()).


Many bugs are down to obscure low-level errors, such as going over an array's bounds or a pointer to the wrong place. Calling copy constructors on these potentially defective objects (after all, your program has just done something 'wrong') is not always going to work; when it does, it is not always going to be meaningful; and it is not always going to show you the problem.

You are also requiring a heavy-handed approach to debugging: grab every variable we could possibly need to check and throw it down. How is this much better than the old routine in Basic of polluting the program with a load of 'print' statements? How much work do you have to go to for that? It looks like an extremely unwieldy exception class, too. How would you implement it? A variable argument list? Or would you redefine the exception type each time you felt like checking different variables? Or every time you added a new variable to the routine that throws the exception?
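To illustrate, the 'grab everything' style seems to force something like this (all names invented), which grows every time you decide you care about another variable:

#include <string>

class render_error
{
public:
    render_error(const std::string& msg,
                 int frame, int vertex_count, float elapsed)
        : msg(msg), frame(frame),
          vertex_count(vertex_count), elapsed(elapsed) {}

    std::string msg;          // another member, and another constructor
    int         frame;        // argument, for every variable you might
    int         vertex_count; // conceivably want to inspect later
    float       elapsed;
};

// ... throw render_error("bad vertex", current_frame, vertices, timer);

Compare that with assert(vertices <= MAX_VERTICES).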

quote:
2) How do you get to the information in calling functions from inside called functions when an assertion is called? You hit the break button on the compiler. Do the same when the message box is up in a catch() statement.


Again, by that point several automatic variables have been deallocated, and merely copying them is not always sufficient. You are also suggesting that every function which can throw has its own catch block, which is not really the case, nor should it need to be. A third, platform-dependent argument is that I run programs outside of the debugger and only start the debugger once a program has crashed or failed an assertion, by clicking the 'debug' button that appears. That button does not appear with exceptions.

You control where the message box pops up. You could just as easily call display() before throwing the exception, so that you can break inside the try block.

It's a lot of code and effort compared to assert(a != 0), and it doesn't gain you very much. Cleanup, maybe, but I've never found that to be a problem during debugging.

quote:
2) Why aren't exceptions easy to use and concise?


Because they take a lot more explicit code to achieve simple things.

assert(a != 0);

is much more concise than

try
{
    if (a == 0)
        throw NullA();
}
catch (const NullA& e)
{
    report(e);
}


quote:
-------------------
I personally think that platform independent code in a custom language is going to become more and more popular as portability and user-mods gain importance. Less and less of the game is going to be done in native code, and more in languages 'closer to the problem domain', whether they are simple scripting languages or compiled bytecode.
------------------

Ah...no, I don't think so. But then if I am ignorant how would I know I am ignorant?



If you do not think so, then you are not following the trends.

quote:
2) Do we both understand correctly what a Virtual Machine (VM) is? I think not. So I will explain what I know (do be prepared, it can be boring) in the hopes that we will clarify any mistaken concepts we may have.

=SNIPPED DETAILS=

The only real problem with VMs is that the CPU/memory model doesn't exist. At best you will be operating at the efficiency of the target CPU. At worst you could be...well...let's just say very, very slow. You see, the VM interpreter (or CPU) does its best to match its virtual bytecodes to the actual CPU's opcodes, but all CPUs are built differently. You will get a definite performance hit for being this generic at such a low level. Not only do the virtual bytecodes not match the actual opcodes, but software must drive and evaluate the CPU and the code, manage the cache, the memory, etc. If the target machine doesn't turn out to be exactly the same as the VM, you get a big performance hit. Not only is the VM CPU model inefficient by definition, but the VM itself requires extra resources to emulate another computer.


The fact is, you don't have to get a big performance hit. A noticeable one, perhaps. But in many cases, an unnoticeable one.

quote:
By sheer obviousness, we can observe that software written in all native code will run faster on the same hardware than software written using a VM.


Yes, and it will often take an order of magnitude less time to write. By the same token, software written in C++ does not run as fast as software written in assembly, for exactly the same reasons: C++ is generic and does not take advantage of very specific asm instructions. You lose performance to gain coding time and structure.

quote:
Why do I think VMs are needless? Well, for portability, C++ classes, virtual functions, and DLLs provide all the flexibility you could ever need.


It is also like giving power tools to a 4-year-old. Once past the low-level details, you only need a subset of the functions to actually make the game. The rest can be implemented in something higher level, where you trade off a little performance for more productivity, fewer bugs, and a smaller, better-defined interface. You also do not have to own different compilers for every platform you are aiming at, since the VM is already there. Nor do you have to worry about functions that are not present on certain platforms, or that require different syntax on different platforms. You can create a slimmed-down interface that increases productivity.

quote:
Why do you need to make a VM for portability? You don't. The virtual CPU, bytecodes, and language are merely replacing what a C++ compiler does; a C++ compiler already turns a portable language into machine-specific code.


C++ is not a perfect language. It is also general purpose. It is not friendly to non-programmers. Given a knowledge of what you need, you can create a VM and language that are nearer to being perfect for your task, more specific, and easier for non-programmers to work with. Arguing that C++ does everything you need is like saying Visual Basic is not needed, since you can do it all in C++. Visual Basic offers several advantages over C++ in certain instances: simplified memory management being the main one. More readable code to a new programmer is another. These are two benefits that a new application-specific language can (and usually does) bring.

quote:
OK, now for the custom game engine reason. *rolls up sleeves* Why do you need a VM? It only makes the programmers who use the engine learn yet another language,


The languages are usually designed to be more intuitive and closer to the game than a generic language can be. Thus, they are quicker to learn. A good programmer will be able to learn a new language quickly anyway. A non-programmer (who these languages are often also aimed at) will learn a simplified language that is directly relevant to the game engine much more quickly than they will learn C++.

quote:
decreases performance over native code,


Using RTTI or exceptions also incurs a performance cost, but you like those features. You have to pay for what you want. However, the cost is usually so minimal that it is worth it.

quote:
and provides countless hours of hard work.

Work done once to reduce the amount of work done later, whether in developing add-ons or in debugging.

quote:
it would still be wasted effort. The useless megabytes of source; the countless extra hours spent by both VM writer and game programmer alike;


The countless hours saved by being able to have your designer 'program' the game levels rather than them having to come and ask you to code in yet another custom feature...

quote:
-------------------
I personally think that platform independent code in a custom language is going to become more and more popular as portability and user-mods gain importance. Less and less of the game is going to be done in native code, and more in languages 'closer to the problem domain', whether they are simple scripting languages or compiled bytecode.
-------------------
It's nice to dream...


No, this is happening. Baldur's Gate featured Lua for part of its AI. The Unreal series runs on UnrealScript, a compiled Java-like bytecode that runs on a VM (and this is not just game logic; it goes as low-level as texture and vertex management too). Quake 3 has its own language, it seems, although it is apparently almost entirely C-like. Jedi Knight had its own language, and you can read about it on gamasutra somewhere.

If you want to deny what is -actually- happening, that is fine. But the real, observed trend is -away- from native code and towards scripting languages and engine-specific languages.

quote:
If you want to get technical about the whole thing, the point is modularity. Separate the programmer's needs from the computer's optimization from the user's needs, and you've got: 1) a language, 2) a compiler, and 3) an API. Throwing them all together into one disorganized mess called a VM is not to progress but to regress...


Why should an API differ from the language? BASIC has worked just fine without needing to make such a distinction. You call functions to do things, you don't need to know where they came from. And why should we need an explicit compiler? These are just added complications.

quote:
To put it bluntly, you shouldn't have to create a new computer for every piece of software that you make. That's a bit backwards...


The idea is generally that you don't. You create a better system once, which can then be used repeatedly in future applications, enhanced by programmers and non-programmers alike. As a library writer yourself, you should appreciate that even though you are duplicating the efforts of what has gone before, the idea is to do it well enough that it will be useful for many future applications.

(Nested quoting has stopped working. Grr.)

Edited by - Kylotan on May 29, 2000 11:34:30 AM

Edited by - Kylotan on May 29, 2000 11:38:18 AM
