null_pointer

Exceptions...


quote:
Original post by Joviex

Labels are just that, nothing more.



Well, first you are defining labels in terms of your definition of labels, which rests on your definition of labels, and so on. Circular reasoning is a sure sign of improper logic.

First, using the definition of the word exception, from which the label comes, is an obvious check on whether you are using exceptions according to their original purpose. If you are doing something that doesn't even fit the description of an exception, something is wrong, and there are two possibilities: 1) the label does not accurately describe its function, or 2) what you are trying to do could be done better another way. I would say 2 is correct, simply by comparing the amount of time the creators of the language spent designing exceptions with the amount of time most programmers actually spend studying them (five minutes in the docs?). It also seems very likely that the creators of the language know what exceptions do and labeled them accurately. Their JOB is to create labels that eventually turn into machine code!


quote:
Original post by Joviex

Explain to me how "computer" describes both the physical object and what it can do. Wait a minute, describe it without labels. They are an end run into the human need to externalize the visible world.



You're using labels to explain about labels...how ironic.

Hmm...you've run smack into what humans call language. They create language to describe things and to communicate with each other. I don't know what you intend to say, other than that nothing is absolute and everything must be experienced, blah blah blah... If nothing is absolute, what can you experience? Experience requires both an absolute and a relative viewpoint. If you have only a relative viewpoint, you have nothing to view.

But regardless, humans attempt to define what they see. When they accurately define what really exists, then the language becomes better. In other words, it's a progression from worse to better (ideally). If you can't understand these labels, I really can't help you...


quote:
Original post by Joviex

Honestly, is it a function of a flashlight to generate light? or is it the function of a light switch to connect the circuit which enables the battery to fire some protons from one end to the other, thus generating a force, which in turn excites the cathode in the bulb to emit some photons??

...

You see, by your generalization, you have encapsulated the world, the universe even, into one huge function.

Universe(google energy, google mass) {};~Universe() { sprintf("Damnit!@"); }



(Nouns are classes, verbs are functions, and properties are data members. Objects are classes with the same functions but individual properties. You can't have an object without a class.)

No, the universe is an object. A rather big, complex object, but an object nonetheless. Also, if you understood anything about objects, you would understand what to call a flashlight. You see, objects are built out of subobjects, and those subobjects are built out of subobjects, etc. down to atoms and beyond. Humanity has yet to see an end to this. Objects have properties, and they contain subobjects that work together. Here is the properly modeled flashlight class, as you have kindly described it, in C++:


class Flashlight
{
public:
    class Switch
    {
    public:
        class Battery
        {
        public:
            class Proton
            {
            public:
                class Force
                {
                public:
                    Force(int nAmount) { m_nAmount = nAmount; }

                    int GetAmount() { return m_nAmount; }

                protected:
                    int m_nAmount;
                }; // end class Force

                Force Fire() { return Force(1); }
            }; // end class Proton

            class Photon
            {
            public:
                Photon(Proton::Force force) { m_nLight = force.GetAmount(); }

            protected:
                int m_nLight;
            }; // end class Photon

            Battery(int nProtons = 100)
            {
                m_pProtons = new Proton[nProtons];
                m_nTotal = nProtons;
                m_nUsed = 0;
            }

            ~Battery() { delete[] m_pProtons; }

            Photon Use() { return Photon(m_pProtons[m_nUsed++].Fire()); }

            void Recharge(int nAmount)
            {
                m_nUsed -= nAmount;
                if (m_nUsed < 0)
                    m_nUsed = 0;
            }

            int GetChargeLeft() { return m_nTotal - m_nUsed; }

        protected:
            Proton* m_pProtons;
            int m_nTotal;
            int m_nUsed;
        }; // end class Battery

        class Circuit
        {
        public:
            Circuit() { m_bBroken = true; }

            void Connect() { m_bBroken = false; }
            void Disconnect() { m_bBroken = true; }

        protected:
            // drains the batteries while the circuit is connected;
            // stops draining them if the circuit is broken
            void GenerateLight()
            {
                for (int x = 0; x < 2; x++)
                    while (!m_bBroken && m_Batteries[x].GetChargeLeft() > 0)
                        m_Batteries[x].Use();
            }

        protected:
            bool m_bBroken;
            Battery m_Batteries[2];
        }; // end class Circuit

        Switch() { m_bOn = false; }

        void Flick(bool bOn)
        {
            m_bOn = bOn;
            bOn ? m_Circuit.Connect() : m_Circuit.Disconnect();
        }

    protected:
        bool m_bOn;
        Circuit m_Circuit;
    }; // end class Switch
}; // end class Flashlight



As you can see, it is NOT the function of the Flashlight to generate Light. It is the purpose of the flashlight. Don't confuse the purpose of an object with its interface.

One nice, neat package. And when you want to use a flashlight, you go out and buy one, and flick the switch. You don't go out and get all the parts and assemble one yourself. You certainly do not have to understand how it works to use it. You may replace the batteries, but only because that was intended (meaning built into the interface) -- you certainly don't have to break open the case and tape them in. Batteries were provided to be replaced, and perhaps recharged. (Batteries are a modular form of power, are they not?) You are confusing the implementation of the object with its interface, but even then you still don't understand about subobjects.

Everything, at root, is not a function but an object, and that is how the human brain thinks logically. To emulate objects with global functions, you must envision a vague "object" and assign functions to it, whether you realise it or not.

BTW, just a thought question that is comparatively unrelated -- so you shouldn't answer it here. "If the universe is an object, who instantiated it? For that matter, who designed its class?"


quote:
Original post by Joviex

Also, don't think that logic is an end that justifies the means. If that were the case, people would never push the envelope on higher thought.



God forbid! He certainly did.

On superficial examination, "logic" is not an end that justifies the means, but a goal. If you wish to seek "logic," then you must use the means. Why do you assign a bad connotation to "means"? Let's look more closely at what lies behind this.

If you are talking about "higher thought" as something vague (read: indescribable), then it really has no use here. Vague things are of no use as goals.

What I define as higher thought is something that is obtained by strict application of already learned principles. In other words, you put basic facts together and adopt the conclusion, which are more facts if the logic was correct. In that case, logic is the means of "higher thought."

I seek better logic, not as an end in itself, but as a means to a greater purpose. Programming. Logic is of no use as an end, only as a means. You must look at the whole picture, like you would a strategy game. There can be more than one means and one end, and they are paired and then grouped into levels of priority based solely upon their inherent dependencies. You need logic if you are going to do anything right! Proper logic is a synonym for the peak efficiency of the human brain.

Further, all people start with some idea of logic, but it is crude and simple and good only for grasping the basic facts. Look at children. First, they imitate. Then, their natural curiosity takes over and they use their simple logic to discover why other people are doing the things that they themselves are imitating. In simpler terms, they begin to reason.

It takes a while for a child to grasp ideas that seem easily obtainable to you. All other things equal, it is because of a difference in logic. Logic determines how efficiently you can harness your raw intelligence to perform a particular task. Thus, synonyms for logical and illogical would be rational and irrational.

However, rationality involves logic, and irrationality involves the illogical. Rationalizing is the process of turning what you perceive as the illogical into the logical by ignoring your presuppositions. Rationalizing is always done with a distinct purpose that actually, if closely examined, is in direct conflict with the presuppositions.

However, logic doesn't get "better" by nature. People become "better" and "worse," depending upon their adherence to their presuppositions, and of course the correctness of their presuppositions. In simpler terms, how they interpret their experiences determines whether their capacity to reason correctly grows or shrinks. Presuppositions are the viewpoint; logic is merely the ability to see.

Perhaps you've seen people degenerate into insanity? We say this is because the events were "just too much," but in reality we mean: "I feel sorry for the person; if I had to go through that, I might have done the same thing." Or perhaps you've heard of people who had bad childhoods and use that as an excuse to murder? How many school shootings have you been alarmed at lately? People hate to admit that they are not perfect and are always in serious danger of becoming worse. So the rest of the world must be wrong.

That is precisely the reasoning I find offensive, and it's something everyone does with programming, even in the least little things. They don't use a language feature as it was meant to be used, and when they encounter a problem they think it is a bug in the language or the compiler. Examining beforehand how the language feature is to be used really minimizes the chances of falling into that trap. The same goes for APIs.



How did we get into discussing the origin of the universe from the topic of exceptions? Are exceptions really on the same level as fundamental beliefs?




- null_pointer
Sabre Multimedia


Edited by - null_pointer on 5/1/00 7:47:48 AM

---
Somehow, yet again, I just cannot escape the lure of this thread. Something calls me, beckons me, to make just one more post on the philosophy of exception handling...

I use tools in a way that suits me. If it looks like a hammer, acts like a hammer, and works like a hammer, I shall use it as a hammer. Someone with more knowledge than me may be able to point out that I am actually hammering in nails with a screwdriver. But if this screwdriver is working better than a real hammer for these nails, I will continue using it in this way.

There is one place in some of my code where I use a single try/catch block to simplify error handling when loading a complex file type. Basically, it has to pass several 'tests' before the main data can be loaded in. Many people (especially Pascal programmers) would just do it this way:

if (condition1 == OK)
{
    if (condition2 == OK)
    {
        if (condition3 == OK)
        {
            if (condition4 == OK)
            {
                // Do something
            }
            else
            {
                // Report error 4
                return -4;
            }
        }
        else
        {
            // Report error 3
            return -3;
        }
    }
    else
    {
        // Report error 2
        return -2;
    }
}
else
{
    // Report error 1
    return -1;
}
// Continue...

Which I find unwieldy, especially when resources are being allocated on the heap along the way. So I think of exceptions, and instead of thinking "they are there to handle exceptional circumstances", I think "they are there to provide structured error-handling code and to ensure the destructors get called for my objects".
So I do something like this (watches null_pointer cringe):

try
{
    if (condition1 != OK)
        throw exception("condition1");
    if (condition2 != OK)
        throw exception("condition2");
    if (condition3 != OK)
        throw exception("condition3");
    if (condition4 != OK)
        throw exception("condition4");
}
catch (const exception& ex) // catch by reference, to avoid slicing a copy
{
    ex.ShowMsg();
    return -1; // or false, or something
}
// Do something


Now I find the second example to be far more readable, and more effective in cleaning up my allocated resources, etc. It also handles the error as part of this function, rather than passing on a return value for something else to deal with. I consider this 'better', even though you may say it is less 'logical' and it may not go along with the 'wishes' of the people who conceived of exceptions. (Although I believe Stroustrup is a believer in working and efficient code rather than 'pure' code.)
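The "destructors get called" point is worth making concrete, since it is the strongest argument for the second style. A minimal sketch, with hypothetical Buffer and LoadFile names, and std::runtime_error standing in for the custom exception class above:

```cpp
#include <cstdio>
#include <stdexcept>

// Hypothetical resource holder: its destructor runs even while an
// exception is unwinding the stack, so cleanup is written exactly once.
class Buffer
{
public:
    Buffer(int nSize) { m_pData = new char[nSize]; }
    ~Buffer() { delete[] m_pData; } // runs on both the success and error paths

protected:
    char* m_pData;
};

bool LoadFile(bool bCondition1, bool bCondition2)
{
    try
    {
        Buffer buffer(1024); // allocated on the way in
        if (!bCondition1) throw std::runtime_error("condition1");
        if (!bCondition2) throw std::runtime_error("condition2");
        // ... main loading work ...
        return true;
    } // buffer is destroyed here, whether we returned or threw
    catch (const std::exception& ex)
    {
        std::printf("load failed: %s\n", ex.what());
        return false;
    }
}
```

For example, `LoadFile(true, false)` reports "condition2" and returns false, and the buffer is freed on every path without a single explicit delete.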

quote:
As for using C functions as they are the closest model to the problem, they most certainly are not. Find a function not belonging to an object in real life, and I will concede that point.


Sorry, what I meant was that, within that language (C), a standard function is as close to the problem domain as you can achieve. C++ improves upon that with several features such as overloading. However, inheritance is just another tool: a very useful one, one I use a lot, but it's not essential for good, or working, or logical applications.

Also... how come you say on the one hand, that we should model our programs as close to human thinking as possible, and to make programs totally logical, and yet on the other hand, insist that human thought is very illogical?

Logic is indeed subject to certain laws. But these two statements are fundamentally equivalent:
3 + 3 + 3 = 9
3 * 3 = 9
Both are valid and logical ways of storing the result of three threes. You could argue they are exactly the same, or you could argue that they are different. That doesn't matter to me. What matters to me is that there was more than one way of getting the right inputs to the right outputs. What is in between is an encapsulated 'black box', and as long as I can prove that it works every time, I don't mind if I did it a different way to another programmer, or a different way to how the language designer intended.

Another point: we tend to follow Newtonian laws of physics; however, Einstein's work has shown there to be a slight degree of error in 'naive' Newtonian physics. Nevertheless, Newtonian physics is (the last time I checked, anyway) accurate enough for NASA etc. Here we have an example of where one logic is perhaps better than another, but since the 'wrong' logic is a tool that suits the purpose, it is used. And I don't see anything wrong with that.

You also suggest that in order to 'grow' one should become more logical. Perhaps true growth would come from becoming more able to work outside of logic, but that's more philosophy than programming.

Is there always a single 'best' way to solve the problem at hand? You're assuming 100% knowledge of the problem, which is rarely held. Programmer time is a valid resource too: if it takes 5 years to do something the 'proper' way and 5 hours to do it the 'working' way that perhaps doesn't look right, then you'll have a hard time convincing people that they didn't do the 'right' thing.

Which is 'right'?
int var = 48 / 4;
or
int var = 48 >> 2;
Two methods, two different tools for the same task, the same (correct) output. This is simply down to preference: do you prefer readable code, or code that you don't have to rely on the compiler to optimise? You can prove that both ways are correct, so it comes down to choice.

You mentioned that good code is cleaner and faster and more easily read than bad code: without an optimising compiler available, I assert that the above example breaks that rule. Choose speed, or choose readability. (Of course, if you want to claim that >> is just as readable as /, then the point doesn't work. But I'm sure some optimisation freak could find an even more obscure situation. Duff's Device, anyone?)

quote:

Actually, I'd like to work on the same project some time. We definitely make each other think!

Heh. You handle the game logic, I'll handle the game illogic.

quote:

Or do we just like to disagree? LOL

No


Oh hell, I may as well add a semi-reply to Joviex in here too.

I would have to concur that I don't believe there is always only one 'true' way to model something, even sticking within the bounds of 'logic'. Perhaps there is one 'perfect' system, but since you never have to model the whole system, you can sometimes combine several objects into one, or ignore the existence of an object altogether.

I will also say that I don't believe logic is the only thing worth aspiring to. If you have perfect logic, you have mastered your art, but you cannot see beyond it. Historically, advancements have been held back because people who believed in 'conventional wisdom' would suppress or ridicule anyone thinking too differently. Perhaps there is a 'better logic' beyond the bounds of the currently held logic system? Who knows? I just like to push the boundaries.

---
quote:
Original post by Kylotan

Which I find unwieldy, especially when resources are being allocated on the heap along the way. So I think of exceptions, and instead of thinking "they are there to handle exceptional circumstances", I think "they are there to provide structured error-handling code and to ensure the destructors get called for my objects".
So I do something like this (watches null_pointer cringe):

try
{
    if (condition1 != OK)
        throw exception("condition1");

    if (condition2 != OK)
        throw exception("condition2");

    if (condition3 != OK)
        throw exception("condition3");

    if (condition4 != OK)
        throw exception("condition4");
}
catch (const exception& ex)
{
    ex.ShowMsg();
    return -1; // or false, or something
}
// Do something



Why should I cringe? I would cringe at the first code snippet, but not this. If the file didn't load correctly, it's an exception. It does depend on the circumstances, though. As I said before, if it was a data file, you should be aware that the user may have deleted the file. If it is a program file, you should throw an exception, and perhaps offer to run some kind of install-and-repair utility...

However, with the example from the quote, I would either re-throw the exception or throw a different one more descriptive to the user. Return values make poor error codes. Incidentally, that code is almost identical to code from my library, except that I'd use the type instead of a string mimicking the type.
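The "type instead of a string" idea can be sketched like this. The exception class names are hypothetical; the point is that the catch clauses dispatch on the type itself, rather than comparing strings inside one generic handler:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical exception hierarchy: one type per kind of failure.
class FileMissingError : public std::runtime_error
{
public:
    FileMissingError(const std::string& sFile)
        : std::runtime_error("missing file: " + sFile) {}
};

class BadFormatError : public std::runtime_error
{
public:
    BadFormatError(const std::string& sFile)
        : std::runtime_error("bad format: " + sFile) {}
};

// The catch clauses select the handler by type; no string parsing needed.
int HandleLoadError(bool bMissing)
{
    try
    {
        if (bMissing)
            throw FileMissingError("data.bin");
        throw BadFormatError("data.bin");
    }
    catch (const FileMissingError&) { return 1; } // e.g. offer to reinstall
    catch (const BadFormatError&)   { return 2; } // e.g. offer to repair
}
```

Each handler can then do something specific to its failure, which a single string-carrying exception cannot express without re-parsing.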


quote:
Original post by Kylotan

I use tools in a way that suits me. If it looks like a hammer, acts like a hammer, and works like a hammer, I shall use it as a hammer. Someone with more knowledge than me may be able to point out that I am actually hammering in nails with a screwdriver. But if this screwdriver is working better than a real hammer for these nails, I will continue using it in this way.



You are missing something in your model though. Hammers are slightly easier to use for hammering nails. So, if you are using a screwdriver to hammer nails, then there must not be a hammer available. However, with C++ there is a plethora of tools, and I seriously believe that making one tool into a one-size-fits-all because of personal preference will hurt your code. Exceptions are not errors, although errors can be exceptions. There are far better ways to handle things if they are not exceptional. Exceptions do slow the code down somewhat, and they are definitely overkill for something that can be handled with better logic. That means preventing errors, not putting in mammoth if...else statements in place of exception-handling.



quote:
Original post by Kylotan

Now I find the second example to be far more readable, and more effective in cleaning up my allocated resources, etc. It also handles the error as part of this function, rather than passing on a return value for something else to deal with. I consider this 'better', even though you may say it is less 'logical' and it may not go along with the 'wishes' of the people who conceived of exceptions. (Although I believe Stroustrup is a believer in working and efficient code rather than 'pure' code.)



You're getting offended now. It's still silly to think that the people who came up with the language would not know as much about exceptions as you do. They are not ruling the world like a pack of dictators; they created a language. You must learn to work with the language's system of coding and not against it. Stop re-inventing the wheel and concentrate on writing the program.

Further, if I said it was less logical, I would most certainly be incorrect. There is only one logical, best solution to each problem. You can't base your coding on personal preferences without hurting your coding ability.

(Also, you did pass a return value.)

I don't care about what you call "pure" code; as I pointed out before, good code is faster, cleaner, easier to read, and correctly models real life. I don't understand what problem you have with that. Unless you consider it to be only my personal preference? Certainly not. If I let my preferences have their way in my programming, it would be terrible. Every time I change the way I code, I do it for a reason that is based on facts. Not according to how I like my if() brackets placed!


quote:
Original post by Kylotan

Also... how come you say on the one hand, that we should model our programs as close to human thinking as possible, and to make programs totally logical, and yet on the other hand, insist that human thought is very illogical?



Good question!

We should strive to do what's correct, no matter what many people do. Even if I were the only coder on earth with this idea, I would still insist that we should code according to facts. People should adjust to what is logical, if they ever wish to become more logical. There is so much bad logic; I have it; everyone has it! If you want examples of this, consult the threads in a programming forum (such as this one). If you don't understand what I'm talking about, it is the rules that deal with human language in general; that is, rules, statements, and definitions. I didn't invent it. The scientific method uses logic. If you want to know what it is, check out a good book on proofs...


quote:
Original post by Kylotan

Logic is indeed subject to certain laws. But these two statements are fundamentally equivalent:
3 + 3 + 3 = 9
3 * 3 = 9
Both are valid and logical ways of storing the result of three threes. You could argue they are exactly the same, or you could argue that they are different. That doesn't matter to me. What matters to me is that there was more than one way of getting the right inputs to the right outputs. What is in between is an encapsulated 'black box', and as long as I can prove that it works every time, I don't mind if I did it a different way to another programmer, or a different way to how the language designer intended.



They are not fundamentally equivalent; they are functionally equivalent. That is, they do the same thing. Function is only one part of the task at hand. There is also purpose. In this case, purpose would dictate which notation you use. If you were working with multiples of 3 in your code, you would use the multiplication. If you were adding three separate numbers that all happen to be 3 through coincidence, you would use addition. Either one will result in a 9.


quote:
Original post by Kylotan

You also suggest that in order to 'grow' one should become more logical. Perhaps true growth would come from becoming more able to work outside of logic, but that's more philosophy than programming.



No, definitely not. Logic and presuppositions are the two parts to rational thought. The only alternative to rational thought is irrational thought, but I would advise against insanity.

Seriously, though, thought can be corrected in only those two parts. You can also have correct presuppositions but incorrect logic, and vice versa. Anyway, to derive one fact from another, you must have both your presuppositions and your logic correct. Otherwise, you will arrive at a non-truth or back where you started. Either way, you accomplish nothing worthwhile.


quote:
Original post by Kylotan

Is there always a single 'best' way to solve the problem at hand? You're assuming 100% knowledge of the problem, which is rarely held. Programmer time is a valid resource too: if it takes 5 years to do something the 'proper' way and 5 hours to do it the 'working' way that perhaps doesn't look right, then you'll have a hard time convincing people that they didn't do the 'right' thing.

Which is 'right'?
int var = 48 / 4;
or
int var = 48 >> 2;
Two methods, two different tools for the same task, the same (correct) output. This is simply down to preference: do you prefer readable code, or code that you don't have to rely on the compiler to optimise? You can prove that both ways are correct, so it comes down to choice.



Actually, you aren't even talking about my definition of good code. I said "good code is...a logically correct model of the interaction between real life and the human brain," so it would obviously be easier to write. It's closer to the ideal thinking.

You don''t need 100% knowledge of everything related to the problem; you just need to abstract the proper details.

Technically, there could be systems on which bit-shifts are slower. You just don't know. The compiler should optimize for the specific platform. The code should be written in the way that fits its purpose. If you are working with bit manipulation in that function, you would (obviously) use the bit-shifts, because multiplication or division by powers of 2 wouldn't make much sense there; all you wish to do is move the bits. The resulting value of the number is meaningless outside the context of the bits used to represent it.

Conversely, if you care more about the value of the number than how it is stored, you would use the arithmetic syntax. Also, you would not need the bit-shifts' powers-of-2 limitation.

Limitations serve to describe and enhance, not to hinder.

Also, what about using variables that fit nicely into registers? Should we try to detect the processor's registers and use the correct variables? That's why we have int! Guaranteed to be a good performer, but we must make sure we don't store more data in it than we can depend on according to the standard. The compiler wins again!


quote:
Original post by Kylotan

You mentioned that good code is cleaner and faster and more easily read than bad code: without an optimising compiler available, I assert that the above example breaks that rule. Choose speed, or choose readability.



(See above.) You just don't know which is faster, so the only thing you can possibly base the choice on is readability. You must rely on the compiler to do it for you, or you have as much chance of losing performance as you do of gaining it.


quote:
Original post by Kylotan

I will also say that I don't believe logic is the only thing worth aspiring to. If you have perfect logic, you have mastered your art, but you cannot see beyond it. Historically, advancements have been held back because people who believed in 'conventional wisdom' would suppress or ridicule anyone thinking too differently. Perhaps there is a 'better logic' beyond the bounds of the currently held logic system? Who knows?



Certainly logic is never an ultimate end. Instead, it is part of the means to every end.

(About the conventional wisdom part.) You are confusing logic with the presuppositions. Their presuppositions were incorrect, or they used incorrect logic to derive new presuppositions from the old ones. At root, the problem was not correct logic but a lack of it.




- null_pointer
Sabre Multimedia

---
I wrote an article for GameDev on exceptions, if you want to read that and see if it gives you any good information.

I try to remember a few key things when designing with exceptions:

Exceptions are meant as a form of non-local error handling (e.g., errors that must propagate outside the current component, or sometimes the current class).

Exceptions help protect code from clients who don't check return values.

Exceptions can cause trauma with state-machine systems. Your exception philosophy must be thought out in the context of state-machine control, lest many maintenance nightmares occur.

Low-level real-time systems are usually better off being exception-less, letting the context of their use decide if exceptions should occur. (Whether you consider a library low-level or not depends on its level of organization.) Although this puts us back into 'check error return values' territory, there are tricks around that, such as a lightweight error class whose dtor checks to see if the error has been checked.

Multiple catch statements can sometimes mean a form of non-local messaging is taking place between two layers of the system. Sometimes this is necessary, but if you design to avoid it, you'll probably be happier maintaining the system.

---
I'm gonna keep this short. I know we will never agree on everything, but that's OK.

quote:

Why should I cringe? I would cringe at the first code snippet, but not this. If the file didn't load correctly, it's an exception. It does depend on the circumstances, though.

Well, technically, none of the criteria specified were amazingly exceptional. You could argue that a dedicated file-reading function should consider errors in files to be part of normal operation; I would say it is normal to have to deal with the wrong type of file or corrupt files. But in this case, exception handling was the cleanest way of dealing with it. Yes, I still passed a return value, but what exception handling lets me do is have a single exit point from a function, catch several different types of errors, report which type happened, clean up any number of resources, and return a simple true or false, which may be a good enough solution for the calling function. I wouldn't throw an exception onward, as the function failing to load a file is not exceptional.

I guess you could say that I see nothing wrong with 'abusing' exceptions, providing that abuse is encapsulated within my own code, where I know all the parameters. I wouldn't expect a client of any library I made to have to agree with my interpretation.

quote:
Incidentally, that code is almost identical to code from my library, except that I'd use the type instead of a string mimicking the type.


I find RTTI to be a somewhat ugly way of handling something which, in this case, can be handled with a string.

quote:

You are missing something in your model though. Hammers are slightly easier to use for hammering nails. So, if you are using a screwdriver to hammer nails, then there must not be a hammer available. However, with C++ there is a plethora of tools, and I seriously believe that making one tool into a one-size-fits-all because of personal preference will hurt your code.


If I am very skilled at using a screwdriver as a hammer, that may be more effective for me than if I were to try to use a hammer, which I am not skilled at using. It's possible that I could learn to use the new tool and become more effective with it, but (a) that assumes the new tool is better, which is not always possible to gauge, and (b) I have to invest time in the learning which I could be investing in using my old tool.

quote:
You're getting offended now. It's still silly to think that the people who came up with the language would not know as much about exceptions as you do. They are not ruling the world like a pack of dictators; they created a language. You must learn to work with the language's system of coding and not against it. Stop re-inventing the wheel and concentrate on writing the program.


Nah, sorry if it seemed I was offended; I was not. I never claimed that I knew 'better' than the writers of the language. On the other hand, they don't know my program. And to assume that a language is perfect as-is is not right, in my opinion. And again, I assert that Stroustrup himself has often been a leading proponent of getting code that does the job rather than finding the One True Way. That is why he wanted a hybrid language rather than a 'pure' one like Smalltalk - there is nothing wrong with finding 2 ways to achieve 1 goal.

quote:

Technically, there could be systems on which bit-shifts are slower. You just don't know. The compiler should optimize for the specific platform.


I'm not aware that the C++ language standard calls for compilers to optimize? If I know the platform I am writing for, I know which will be the quickest code. It is illogical to code for nonexistent platforms.

quote:
It should be written in the way that fits its purpose.


The purpose is to divide by 4.
The wider purpose, in the context it would be in, is also to perform that operation as fast as is possible.

Given a knowledge of the platform, and the fact that the compiler is an unknown quantity, I would sometimes be tempted to 'do it myself' rather than trust the compiler.

quote:

Also, what about using variables that fit nicely into registers? Should we try to detect the processor's registers and use the correct variables? That's why we have int! Guaranteed to be a good performer, but we must make sure we don't store more data in it than we can depend on according to the standard. The compiler wins again!


Yep, here the language fulfills my needs. I use int unless I have a specific need for something else (i.e. to save space). But the language doesn't always make sufficient provision for optimisation in other ways, so occasionally it might be necessary to push the boundaries of what the language intends in order to better achieve what the programmer intends.

All the best

Share this post


Link to post
Share on other sites
Kylotan: If you are using exceptions in the way that they were intended to be used, then you are correct in saying that there is not a better way. If you are using them against their definition, then there is OBVIOUSLY a better method somewhere else.

About the idea with the screwdriver and the hammer -- you're only limiting yourself. Obviously, the hammer, being designed for hammering nails, is a better (more efficient, easier to use) tool for the job. It's just plain logic. Let's put forth an example. We need to hammer some nails into two boards, to put them together to make a beam. Let's say that the hammer has an efficiency, ease-of-use rating of 10 for hammering nails, and the screwdriver has the same type of number, with a value of 7, for hammering nails. Set our skill levels equal at 15, mine for the hammer and yours for the screwdriver. Now, my total combined skill (efficiency) will be 25 for hammering nails, and yours will be 22 for the same job. Therefore, you get more out of the tool for less time when you use the tool designed specifically for the task at hand. I get an automatic bonus of about 14%. Or it would be more logical to say that you would have an automatic deficiency of 12%.

If you put up a skewed example, with the same ratings for the tools but my efficiency is 15 and yours is 25, then you would have a clear advantage. But the range of your advantage is limited by the ratio of the efficiency of my tool to your tool for hammering. It would be in your best interest in the long run to learn how to hammer nails with a hammer. Further, because the hammer is designed specifically for that task, it would be easier for you to obtain a skill level of 25 with the hammer than the same skill level with the screwdriver, for hammering nails. Every workman knows the value of his tools. A good artisan requires both skill and the proper tools.

I have not talked about the short run in the product deadline. That has nothing to do with the ideal circumstances we were just considering. We were talking about which would be better in the long run. I don't have a deadline per se; I want to make it the best there is, and I will not accept limitations imposed upon me by extraordinary circumstances. In other words, if it's not a good product, don't bother selling it.

Sure, 10 minutes in the docs is 100% more time than 5 minutes, but the difference is just not that much. You get out of programming what you took the time to learn. If someone can show me how to do something better, I will adjust (I may get irritable for 5 minutes first.)

I still don't understand why any app in a multi-tasking, resource-sharing OS would "reserve" resources. That makes them unavailable to the OS and to other apps in the meantime. In a multi-tasking OS, you do NOT hog the computer. You tell the OS when you need a resource, and release it when you are done. That may be half-way sensible when your app is the only one running, but when other apps are limited by every resource you "reserve"... I cannot logically come up with a reason as to why you would need to reserve memory at the beginning of the program so that you could release it when you run out. Perhaps you could list several instances where that might be plausible?


quote:
Original post by Kylotan

And again, I assert that Stroustrup himself has often been a leading proponent of getting code that does the job rather than finding the One True Way



How can you say that? If you apply the laws of probability to the number of variables that must be considered with a particular coding problem, it is VERY improbable that you will find even two solutions that provide equal performance, readability, etc. If 2 methods are not exactly equal, then one method must be better. Try to find a problem where at least 2 solutions provide equally suitable results for the 4 criteria I listed earlier. I never said not to use certain tools at all; I said that you pick the tools required by the problem at hand.

If I was concerned with just "getting code that does the job," I wouldn't have started this project. I consider it more intelligent to learn how the language was meant to be used than to mold it to my misconceptions. Everyone has misconceptions, and they hinder thought because you cannot logically arrive at facts through incorrect presuppositions.

(BTW, the people who write the languages consider MANY more cases than you think. They do NOT just make a language without thinking about how it will be used. Furthermore, languages are only subobjects that you select and use in a certain way to make programs. Using the wrong objects for a particular task can only be less efficient. Also, I absolutely doubt that you could come up with a program that the language-makers did not consider.)


quote:
Original post by Kylotan

I wouldn't throw an exception down as the function failing to load a file is not exceptional



It may or may not be exceptional. (Were you making a pun?) Anyway, it would be an error. An exception is merely one type of error. Exceptions are the unpredictable kind; errors may or may not be predictable. To show you how you handle this in real life, here is a simple example. You are in one room in the house and you wish to go to the next room. You see that the only feasible way is through the door. Unless you are really, really stupid, you would not try to walk through the door and throw a few nasty exceptions in the air if you found out it was closed. You would, logically, check to see if the door is open. If it is, then you are assured of safe travel into the other room. If the door is closed, then you can now handle the situation (with a lot less trouble) by opening the door and proceeding through. Your nose thanks you, and the world is safe from sheer idiocy.

I don't understand how this is so hard to grasp with programming! It's rather obvious that some errors are easily predictable, and some are not. I told you, the exact circumstances of the problem tell you how it should be handled. It's not smart to use exceptions for every error, just as it isn't smart to use non-const numbers when you obviously are not going to modify them, and it isn't smart to allocate everything on the heap when it is plausible to do it on the stack, ad infinitum... If you wish to debate this, give me some examples! Show me where it is smart to handle ALL errors after you have messed up! Show me the problems that are NOT handled CORRECTLY by my theory!

If the logic is correct, go after the presuppositions...


SteveC: I read your article and replied to it a while ago -- dunno if you remember me but I was the nut with the long emails.

I do disagree on a few points though. Not all errors are exceptions. I'm not pulling this from my head but from 2 minutes with a dictionary and some common sense. Errors come in two flavors: predictable and non-predictable. If the error is predictable, you handle it BEFORE it happens. If the error is unpredictable, you handle it AFTER it happens. It is (IMHO) stupid to handle something predictable after it has occurred. If it is predictable and logical that something external may affect your program, you should NOT throw an exception when the operation doesn't work. It should be checked for validity before the operation.


quote:
Original post by SteveC

Exceptions are meant as a form of non-local error handling ( e.g. errors that must go outside the current component or sometimes class )



Exceptions are not necessarily errors. You must mean, "Exceptions are meant as a form of non-local 'exception' handling." Also, I don't see why exceptions are limited to non-local, but they do work nicely for that purpose. Moreover, they seem to work just as well within the same function (talking about cleanup). I do not understand why exceptions are meant as non-local, because if they were they should be limited to non-local. It would, perhaps, be more accurate to say that they were merely meant to be used either way.


quote:
Original post by SteveC

Exceptions help protect code from clients who don't check return values.



Bingo! But, should they also enforce proper usage of the class? (like requiring that the file is open before data is written to it. Or is that just another way of saying the same thing?)


quote:
Original post by SteveC

Exceptions can cause trauma with state machine systems. Your exception philosophy must be thought of in the context of state machine control lest many maintenance nightmares occur.



Every class has several states, doesn't it? Why should state machines be any different? If they are encapsulated into a class, they should be protected by only being able to be changed at a high level. The states are built out of subobjects, which should be as independent of each other as possible. If you've modeled your classes correctly, then there shouldn't be a problem; however, state machines do increase the effect of incorrectly modeled classes so I see your point.


Argh! I'm getting tired of this now...I can't change your minds through argument so I will consider my questions answered.




- null_pointer
Sabre Multimedia


Edited by - null_pointer on 5/2/00 8:19:32 AM

quote:
Original post by null_pointer

Kylotan: If you are using exceptions in the way that they were intended to be used, then you are correct in saying that there is not a better way. If you are using them against their definition, then there is OBVIOUSLY a better method somewhere else.


Surely that assumes the language is perfect and has a perfect tool for every task? I don't believe it is perfect, and therefore it is possible to find solutions which go against the standard definition without there necessarily being a better way of doing it.

quote:

I still don't understand why any app in a multi-tasking, resource-sharing OS would "reserve" resources. That makes them unavailable to the OS and to other apps in the meantime. In a multi-tasking OS, you do NOT hog the computer.


In a system where it is possible to allocate X resources to your program, allocating X+1 is not really a big deal.

An example I have worked with: a MUD server can end up terminating for a variety of reasons. In these circumstances, it is imperative that the program leaves a log file to say why it was terminated. If the operating system has no free file handles, then it would be impossible to write this log. Therefore the program takes a file handle when the program starts to ensure that it will have one at the end when it needs it. The correct functionality for the program demands that it has a file handle at the program's termination, and it therefore should not run unless it can guarantee that state.

quote:
I cannot logically come up with a reason as to why you would need to reserve memory at the beginning of the program so that you could release it when you run out.


Any kind of 'graceful shutdown'. The users of your program may not care so much about your program consuming extra resources; however, it may be important to them that it closes down in a stable state. Closing down gracefully may require X kb of memory, and therefore unless you reserve that at the start, you will never be able to shut down gracefully following an out-of-memory exception.

quote:
If 2 methods are not exactly equal, then one method must be better. Try to find a problem where at least 2 solutions provide equally suitable results for the 4 criteria I listed earlier.


I doubt there would be 2 ways that suit all 4. But I don't believe that there is always at least 1 way that suits all 4 either. If one way is very readable and the other way is very efficient, then there is no objectively 'better' way. Readability is subjective to the person doing the reading; therefore it is always partly down to user preference.

quote:
I consider it more intelligent to learn how the language was meant to be used than to mold it to my misconceptions.


Sure, I am not saying to be ignorant of the design considerations behind the language. Just saying that sticking to someone else's goals is not always a perfect solution. Nor should you miss out on, for example, automated calling of destructors and centralised error reporting code simply because the if check you made constitutes a check for a 'predictable' situation rather than an 'unpredictable' one.

It's certainly an added achievement to learn and observe the original intent of the language creators. But I think it is a failing to stay within their bounds on the assumption that they thought of and allowed for everything with a perfect solution. I wish I could dig out the quote, but I can't; however, I am certain that Stroustrup himself has stated that there is often more than 1 way to do something 'right' in C++, partly because the language is not 'pure'. Back in 1986, he considered exception handling, along with templates, both unnecessary for C++! Now they are added: as extra tools, not necessarily to replace older ones, but to augment them.

My previous example with the file opening: it is not unpredictable that at least 1 of the tests would fail. Therefore this should not be a candidate for exception handling, right? But I find exception handling is the most effective way to keep this snippet of code clean and ensure my resources are deallocated. If you can point out a better way, that won't compromise my code speed or readability, please share. I am as willing to learn new tricks and techniques as anyone.

quote:

It may or may not be exceptional. (Were you making a pun?) Anyway, it would be an error. An exception is merely one type of error. Exceptions are the unpredictable kind, errors may or may not be predictable.


Yes, but by definition, if it was totally unpredictable, you could never throw it, as you would never be able to put in the if check in the first place in order to see if such a situation had arisen! You could perhaps divide the 2 cases into "reasonable to expect" and "unreasonable to expect" but then you are edging towards subjectivity again... which is my point.

quote:

I don't understand how this is so hard to grasp with programming! It's rather obvious that some errors are easily predictable, and some are not.


Well, I grasp this, I just disagree with the idea that if something is easily predictable, exception handling becomes 'wrong'. I believe there is a sliding scale from 'most predictable' to 'least predictable'.

quote:
If it is predictable and logical that something external may affect your program, you should NOT throw an exception when the operation doesn't work. It should be checked for validity before the operation.

If I do this:
Type* ptr = new Type;
and there isn't enough memory, it should throw std::bad_alloc, right? As far as I know, there is no way to check for this lack of memory (something external) before it throws the exception. In this case, it's predictable and logical that such a thing may happen (computers have finite RAM), but I can't check it in advance. In effect, the exception is my return value.

quote:

Argh! I'm getting tired of this now...I can't change your minds through argument so I will consider my questions answered.


Sorry to annoy you. Not my intention. I just do not agree with your philosophy and probably never will.

Kylotan: You are confusing the number of variables with subjectivity. You can have 1000 variables in an algebra problem, but it's still logic that solves it. You could argue that there are many different ways to solve that problem; just the number of variables tells you that. However, there will only ever be one best way to solve it (the way requiring less work, and thus less chance for error, is the best way, whatever it may be in this problem). Programming is one complex solution made out of many simple solutions, just like that algebra problem.

Just because something is rated on a scale does not mean it is subjective; if that were true, we wouldn't have measurements.


quote:
Original post by Kylotan

quote:
--------------------------------------------------------------------------------
Original post by null_pointer

Kylotan: If you are using exceptions in the way that they were intended to be used, then you are correct in saying that there is not a better way. If you are using them against their definition, then there is OBVIOUSLY a better method somewhere else.
--------------------------------------------------------------------------------


Surely that assumes the language is perfect and has a perfect tool for every task? I don't believe it is perfect, and therefore it is possible to find solutions which go against the standard definition without there necessarily being a better way of doing it.



Hmm...no, but you are counting on the programmer being perfect, too, because he must compare the language to his own logic to tell which is better. In other words, the moment you start to evaluate the language, you make it subjective. You are subjectifying it. (Yes, I am holding a dictionary.)

I would say from experience that 9 out of 10 problems, bugs, headaches, etc. wouldn't exist if programmers would just use the language the way it was intended. The only thing you have provided with exceptions was a workaround for type-checking. I will go over exactly what it accomplishes and what it does not -- the pros and the cons -- as well as ease-of-use. We will see which method is better, objectively.


Your method
~~~~~~~~~~~

Using one global class to hold all types of exception information. Using strings to hold the type of the class, and also the description of the exception.

Pros:

    1) Allows user to handle all exceptions with just one catch statement.


Cons:

    1) catch() statement type-checking can no longer be used.
    2) User must catch and handle all errors at the same place.
    3) Even though he has caught a generic exception, he must still switch on the type, which is no better than the type-checking that was eliminated.
    4) The syntax checking of the compiler was disabled. You cannot easily check for wrong exception names, and the code may or may not run as planned.


Example:


void MyApp::LoadMRUFile(int nPosition)
{
if( nPosition > nMaxPosition )
throw exception("MyApp::LoadMRUFile", "nPosition was too large");

// save the current document
document old_document(current_document);

// continue loading
try
{
file mru_file(GetMRUFileName(nPosition));

mru_file.open();
mru_file.read((void*) &current_document, sizeof(document));
mru_file.close();

// make sure it''s valid
if( !current_document.check() )
throw exception("MyApp::LoadMRUFile", "document loaded from disk was invalid");
} // end try

catch(const exception& e)
{
// we still need to check the type, because one of the
// functions we've called might have thrown another
// type of exception. (note: C++ cannot switch on a
// string, so strcmp from <cstring> is needed instead)

if( strcmp(e.type, "MyApp::LoadMRUFile") == 0 )
{
// do something -- possibly restore old_document
if( old_document != current_document )
current_document = old_document;

// re-throw the error, or else user will have
// no clue as to what happened
throw;
}
} // end catch
}



Explanation:

When using that particular "method" of exception-handling, you must always check the type to make sure you are handling local exceptions. If you wish to handle non-local exceptions (i.e., from functions you have called), then you must depend on knowing which code throws which exceptions. Unfortunately, you have destroyed the throw() syntax that comes after the function declaration, so you have no way of knowing which functions throw which exceptions without looking at the source for the class. The user of your class has to go through your source, too. Also, you place the exception types in the .cpp files, mixed among the code, when they could be in the .h files at the beginning of the class. This method has nothing to do with modularity, as it creates dependencies all across the board.

Note also the disorganized mess in the lone catch statement: is it worth having only one catch handler when the catch block contains all the code of the smaller catch blocks? You are not eliminating code by using one catch statement; you are just combining small catch statements, and accomplishing the same thing.

Worse yet, what if you use the same description in a different place, but don't spell it correctly? Or perhaps the wording changes slightly? You could get around this with const char[]s defined in the header, but that's just working around the language again.

In short, it accomplishes nothing desirable that my method does not accomplish, and imposes extra limitations (which are unnecessary). It also imposes extra overhead.

You will find that every time you work around a language feature.



My Method
~~~~~~~~~

Use one global class, and derive all class exceptions from it. Every class will have a generic exception type, from which the class-specific exception types are derived. Uses empty derived classes as simple type-checking.

Pros:

    1) Uses the language's type-checking to determine the type of the exception.
    2) Allows user to handle all exceptions with just one catch statement, if desired.
    3) Allows user to handle class-specific or all class generic exceptions, or any mixture of both types, if desired.


Cons:

    (none)


Example:


// classes that are declared in the headers:
// ::exception
//
// file::exception - derived from ::exception
// file::end_of_file - derived from file::exception
//
// MyApp::exception - derived from ::exception
// MyApp::invalid_mru_position - derived from MyApp::exception
//
// document::exception - derived from ::exception
// document::invalid - derived from document::exception

// preceding stuff wouldn't be duplicated with each function;
// it's just here for reference.

void MyApp::LoadMRUFile(int nPosition)
{
if( nPosition > nMaxPosition )
throw MyApp::invalid_mru_position();

// save the current document
document old_document(current_document);

try
{
file mru_file(GetMRUFileName(nPosition));

mru_file.open();
mru_file.read((void*) &current_document, sizeof(document));
mru_file.close();

// make sure it's valid -- will throw document::invalid
// if not valid.
current_document.check();
} // end try

catch(document::invalid)
{
// restore old document
if( current_document != old_document )
current_document = old_document;

// re-throw, possibly tell the user of the corrupted file
throw; // (note: a re-throw propagates out of the function; it is NOT caught by the sibling catch(::exception) below)
} // end catch(document::invalid)

catch(::exception)
{
// we don't know what happened and we didn't plan it,
// but we do have a chance to clean up if we need it.

// re-throw the error
throw;
} // end catch(::exception)
}



Explanation:

With this type, we let the compiler juggle the "type strings" and the "description strings," as type-checking was the way exceptions were made to work. Syntax errors are caught by the compiler, and all exceptions are declared in their respective headers. Later on, when it comes time to add the throw(/* exception-type */) to the function declaration (whenever VC 7 comes out!), the user now knows exactly which exceptions can occur in any given function (and also any functions called by the given function).

I am using multiple catch() statements, because each requires different handling. If you find you are duplicating code, re-throw the exception and place a generic handler as shown in the above example to handle the cleanup code. The cleanup code is the same for each exception in the given function, because each function cleans up after itself. Exception-handling does not need to depend on how far we got in the function when the exception was raised.

This is more akin to modularity, at least much more than your string-exceptions.

On a side note (and not to be confused with the main point), because I am writing a library, I have a set of classes that are made to function with each other. If you really wanted a correct model of the classes (file, document, MyApp), you would have made subobjects where dependencies were evident, but I didn't include them to keep from distracting you from the main point. Further, you can derive from the ::exception class if you want to make ::out_of_memory or whatever you like, and it won't conflict with the existing code in the least. I am considering adding both types (class-specific and also ::out_of_memory, etc.) to my library.

My method keeps all the pros of the language, and none of the cons (there just aren''t any).


On general principle, I have learned that a sure-fire check to see whether you are using the language incorrectly is if you are: 1) duplicating code, 2) losing type-checking. 1 and 2 seem to occur in every piece of bad code (bad code, meaning: not using the language correctly) that I have examined.


Back to the point, though. You now understand how my method works. To further explain it, the only time RTTI is logically necessary is in the catch(::exception) in main(), which only overrides the "unhandled exception error" to provide more reasonable output for my library. Since everything starts from main(), everything must go back to main(), and that is an excellent place to catch-all.


quote:
Original post by Kylotan

Surely that assumes the language is perfect and has a perfect tool for every task? I don't believe it is perfect, and therefore it is possible to find solutions which go against the standard definition without there necessarily being a better way of doing it.



No, not necessarily. Far more often the programmer is causing the trouble.

The language is extremely simple to use, if you know it. Most programmers just don't know how to use it correctly. They waste unheard-of amounts of time working around language features, then post to message boards because they can't figure out what went wrong. And here's the worst part: they don't realize they dug the hole! In other words, the time they spend working around language features could have been spent learning how to use them, with a lot of time to spare towards solving the current problem.

In other words, I would estimate that: the case of not using the language correctly exists 99% of the time, and the case of trying to do something not covered in the language exists 1% or less of the time. Although many cases appear to be of the second type, they are really of the first type through the ignorance of the programmer.

I absolutely detest the "different methods are equally good" way of thinking. If there exists no feature that does what you want, then make your own. However, it is FAR MORE COMMON that what you wish to do is incorrect. The language is far from being a generic, hulking beast that you must trim down and then "season to taste." There can only be one best solution to any problem, because the word "best" means one. You can have two "good" solutions. But you can NEVER have two "best" solutions. You must decide which you will believe: 1) a "best" solution exists for every problem, or 2) several "good" solutions exists, but not a "best" solution, because of individual differences. 2 is asinine considering the amount of relevant circumstances for every individual problem, and also the obviously imperfect nature of human thought. If you are going to use the word "best" then use it correctly.

Perhaps you are thinking I meant: "there is one best way to solve every problem"? This is what I truly mean: "For each problem, there exists a best way." Understand the difference? We don''t use one way to solve every problem; however each problem has its own best way. Meaning, templates might be the best way to solve one problem, but simultaneously inheritance might be the best way to solve a different problem.

If you are using the language (referring to C++ now) correctly, for a task that it was meant to do, the code will be simpler, cleaner, more efficient, and more amenable to optimization than anything you could possibly write yourself. Until the language feature is added or improved, you are only emulating something. Understand? If you want to run 32-bit code on a 16-bit processor, you're going to have to write an emulator. If you are trying to do something outside the confines of "the box" from inside, you have, at best, a flawed implementation. Flawed, here, means slow, unreadable, illogical, or a combination thereof.

(On a side note, those are all attributes of the same object, bad_code, so they are usually found together.)

What I am saying long-windedly is that while the language is imperfect, it is much closer to being perfect than you think. It is wrong for a programmer to think that the language is imperfect until he has mastered it. Otherwise you will fall into the trap of, in order of severity: 1) bashing the language, 2) working around the language features, and 3) attempting to write your own language "within the box" by trying to introduce things that are outside the box.


I will now deal with the specific cases.


quote:
Original post by Kylotan

quote:
--------------------------------------------------------------------------------

I still don't understand why any app in a multi-tasking, resource-sharing OS would "reserve" resources. That makes them unavailable to the OS and to other apps in the meantime. In a multi-tasking OS, you do NOT hog the computer.
--------------------------------------------------------------------------------


In a system where it is possible to allocate X resources to your program, allocating X+1 is not really a big deal.

An example I have worked with: a MUD server can end up terminating for a variety of reasons. In these circumstances, it is imperative that the program leaves a log file to say why it was terminated. If the operating system has no free file handles, then it would be impossible to write this log. Therefore the program takes a file handle when the program starts to ensure that it will have one at the end when it needs it. The correct functionality for the program demands that it has a file handle at the program's termination, and it therefore should not run unless it can guarantee that state.


quote:
--------------------------------------------------------------------------------
I cannot logically come up with a reason as to why you would need to reserve memory at the beginning of the program so that you could release it when you run out.
--------------------------------------------------------------------------------


Any kind of 'graceful shutdown'. The users of your program may not care so much about your program consuming extra resources; however, it may be important to them that it closes down in a stable state. Closing down gracefully may require X KB of memory, and therefore unless you reserve that at the start, you will never be able to shut down gracefully following an out-of-memory exception.
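The reserve-then-release pattern described in this quote can be sketched in a few lines. This is a minimal illustration, not anyone's actual server code: the 64 KB figure and the function names are invented.

```cpp
#include <cstddef>
#include <cstdio>
#include <new>

// An emergency block allocated at startup; it is freed the moment
// bad_alloc strikes, so the shutdown path has memory to work with.
static char* g_emergency = new char[64 * 1024];

void graceful_shutdown(const char* reason)
{
    delete[] g_emergency;   // return the reserved block to the heap
    g_emergency = 0;
    std::fprintf(stderr, "shutting down: %s\n", reason);
}

// Returns 1 if the program had to shut down gracefully, 0 otherwise.
int run(std::size_t bytes_wanted)
{
    try {
        char* p = new char[bytes_wanted];
        delete[] p;
        return 0;
    }
    catch (const std::bad_alloc&) {
        graceful_shutdown("out of memory");
        return 1;
    }
}
```

Whether this belongs in a given program is exactly what is being argued below; the sketch only shows that the mechanism itself is simple.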



Well, first, if you are using exceptions when shutting down, your error logging code would be further from the file handle and memory releasing code, and thus the file handles and memory would be released first. That is the correct way to handle the problem. You see, running out of resources was predictable, but the correct way of solving it relied on the nature of exceptions being thrown instead of the work-around of reserving resources or whatever. You over-corrected.

You could argue that the OS might not be in a state where it can juggle file handles, but in that case I doubt it could write information to a log file either. "Reserving resources" has no purpose in a multi-tasking OS, even if it is "just a file handle." Every resource counts, and in an environment where there are typically hundreds of modules loaded, they add up fast. Never depend on the user having more resources than you will need during run-time. Also, when shutting down, you will be releasing the resources, then writing to a log file or displaying information to the user. So, logically, you WILL have resources to use for logging at shutdown, provided of course that you have used them before.

Of course, you could say "what if another app took the resources as I was releasing them?" but that is just your bad luck. Suppose the other app needs them just as much, or perhaps more? You cannot know, unless you are intent on taking over the OS and using it as a single-tasking OS like DOS. (I would also like to point out that most OSes have "fail-safe" methods in their APIs for precisely this purpose. Look up the flags that go with MessageBox in Win32 for an example.)

It is the OS's job to coordinate the resource usage of other apps with your own; it is not your job. Whenever you take that job upon yourself, you take away from the efficiency of that job. You can try to ignore that principle by cutting down the resources you "need" to "reserve," but you will never get rid of the principle. Principles do not change with the varying degrees of their circumstances. Misconceptions do.


quote:
Original post by Kylotan

quote:
--------------------------------------------------------------------------------
If it is predictable and logical that something external may affect your program, you should NOT throw an exception when the operation doesn't work. It should be checked for validity before the operation.
--------------------------------------------------------------------------------


If I do this:
Type* ptr = new Type;
and there isn't enough memory, it should throw std::bad_alloc, right? As far as I know, there is no way to check for this lack of memory (something external) before it throws the exception. In this case, it's predictable and logical that such a thing may happen (computers have finite RAM), but I can't check it in advance. In effect, the exception is my return value.



Technically, it isn't your return value, but I understand what you are trying to say there. However, what is your point? It is the job of the OS to make sure that memory is available. It is not your job.


HOW THE LANGUAGE SEES IT

If you look at this as if you were designing the new() operator, this logic is correct. Exceptions enforce correct coding. If the new() operator had not thrown an exception, you would have tried to use a NULL pointer and been met with an odd error that doesn't tell you the real problem. Since new() throws bad_alloc, you must handle it. Indeed, you cannot proceed along the same program flow path without handling it.
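That enforcement can be seen side by side with the check-in-advance alternative C++ does offer, new (std::nothrow). A sketch (the helper function names are invented):

```cpp
#include <cstddef>
#include <new>

// Exception style: a failed allocation forcibly diverts program flow,
// so a NULL pointer can never be used by accident.
bool alloc_throwing(std::size_t n)
{
    try {
        delete[] new char[n];
        return true;
    }
    catch (const std::bad_alloc&) {
        return false;
    }
}

// Return-code style: new (std::nothrow) yields NULL instead of throwing,
// and nothing in the language forces the caller to test the result.
bool alloc_nothrow(std::size_t n)
{
    char* p = new (std::nothrow) char[n];
    bool ok = (p != 0);
    delete[] p;
    return ok;
}
```

Both forms detect the failure; only the first one refuses to let the code continue as if nothing had happened.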


HOW THE OS SEES IT

Why does my logic not make sense here? Where is the conflict? Responsibility is the "hidden" keyword. It is what is meant by "logically expected to handle it." Responsibility, in this case, depends on whether the user should see the memory exception and how it should be presented. The OS juggles resources for the entire system, not your app. If the OS needs memory, say, for a new program the user just started up, what can it do? It cannot arbitrarily pick an app and shut it down. It cannot send messages to apps in the system to condense their memory. What must the OS do? It must tell the user to close some programs and try again. It's the only feasible way.

But what does it do with the programs? Can it pause the program trying to start up until memory is free? What if other apps are calling new and delete too? If the OS paused all apps calling new() when it was out of memory, then it would either lock them up (you wouldn't be able to close them if they're paused) or slow down the system tremendously (plus all the virtual memory access time you are probably already using). The memory allocation call should just fail, for the app, and should require the program flow to be changed (now we're into language territory again -- see above).

If you leave out responsibility, exceptions will look a little lopsided. That is why the phrase "logically expected to handle it" occurs in several forms in many places on this page. Responsibility is not subjective. Thinking that responsibility is subjective is only a method of escaping responsibility.


Let's look at an example. 4 people are standing on the corners of a square, looking at each other (that is, facing towards the center). In the center of that square is a cube. The cube is painted with 4 colors, and, for each side, only one color is used. Let's call them red, blue, green, and yellow. To me, because I can see only one side of the cube, the cube looks red. To you, it looks blue. To the other people, it looks either green or yellow. But the cube is not one solid color; it is 4 different colors.

Human perception is like that. We can only see one viewpoint at a time. But the cube exists, and appears different from many angles. Logic can take the different views of the cube and "re-build" the cube from those separate parts. You might say, then, that logic is the method by which we converse. However, before you can re-build the cube, you must have a "point of reference." In other words, to tell what the cube looks like, we needed to know the relative positions of the people facing the cube. We need to know how the viewpoints relate to each other.

Perhaps it is wrong to say that "people are illogical." But they do use logic incorrectly. That is why, taking the position and colors of the sides of that cube, it is possible that we each arrive at a different cube. However, the cube really only ever exists in one form. Our perceived cubes are, in fact, only imaginary. If our perceptions of the whole cube are different from each other, then either one or both of the perceptions must be incorrect. (Logical rule: things equal to the same thing are equal to each other; and, also, things not equal to each other cannot all be equal to the same thing.)

The relative would have no purpose without the absolute.

In other words, logic is the method of taking several subjective viewpoints and discovering the objective. If people couldn't do this, they would never be able to communicate because all things come through a viewpoint before your brain can process them. If you ever wish to evaluate anything in an objective way, you must use logic.

Think about how people play 3D games for a moment. If you simply move towards an object, you don't have much idea how the room is laid out, but if you move in all directions (sideways, up-down, rotating) you can easily tell the dimensions of everything in the room, and it's easy to go through it quickly in a death-match. That's why 3D games with polygons are harder than 3D games with sprites if you don't know how to move in several ways at the same time. Try limiting yourself to one method of movement and see how it affects your perception.

But you must never make the mistake of believing that your perception is the reality. Reality cannot be viewed objectively, from all angles at once, by a human. That is why logic is so necessary; it allows us to view all angles of reality. Not at once, but we can grasp it by "interpolating" the data from several different perceptions.



I would ask something of you before you reply, though: find me a programming task for which there is no C++ language feature designed. If you have trouble coming up with one, then my point is sound.

If I may, I'd like to ask another question: if there is no absolute for when and how to use exceptions in a given problem, how can they work? Exceptions exist to provide (and enforce) proper communication between programmers' code. There would be no purpose for exceptions if people couldn't communicate. There is no communication without a standard.

And another question yet. If you were to say, "I will use the language features as they suit my logic," then you will end up re-writing them to suit your logic (look at the type-checking example above -- bet you had no idea the "built-in" type-checking was that powerful). You must either re-write your logic or the language's logic, but they cannot function together as-is. Which would you adjust?



So, the debate is not on whether exceptions require a standard but on what the standard should be.




- null_pointer
Sabre Multimedia

Ok, keeping this short:
Your example regarding the typechecking is very valid. I never intended to recreate typechecking with string comparisons. Sorry if I gave that impression. However (and I believe you will agree), 99% of the time, an exception means Game Over. Therefore throwing the exception is done to (a) free resources, (b) impart information to the user/programmer. The resources part is already covered, as destructors will be called no matter whether you throw some complex exception or an int. But the information part could come in more than one form. For -simple- situations, where I just need to know what went wrong, and I will handle it locally, I may as well throw a string. I know that I will only be catching 2 different types of exceptions here: a string, or Something Else. This doesn't stop me using an exception hierarchy for other things, but perhaps this part of the code doesn't need that level of sophistication. And that is the type of situation that I am talking about.

Of course there can never be 2 'best' solutions. But I do think there can be 2 equally good ones in a case where there are pros and cons on both sides. You obviously disagree. Perhaps I could reword my assertion in a way that makes sense to you: 2 programmers, given exactly the same problem, may find 2 different solutions, and each be able to state that the other's solution is inferior. The main criterion I draw upon here is readability. Readability, by nature, is subjective: it involves a totally analogue human being reading something. If you want to argue that readability should be some sort of standard, then do so. However, those 2 given programmers writing code for themselves will probably have 2 different ideas of what is readable to them.

I will continue to disagree about the responsibilities of a program running on a multitasking OS. The OS means nothing unless the software running on it works. If the client's definition of 'does it work' means 'can it be guaranteed to write a log file when it shuts down', then the program should do that. For an average Windows app I would agree, reserving resources is not a good idea; however, on critical servers you cannot necessarily afford the luxury of hoping the OS has a resource when you finally get around to needing it. This should be at the discretion of the person choosing to run the software. It would not be unreasonable for the program to refuse to start up unless it can guarantee such a resource. Reserving resources also protects you from potential errors in another program that may accidentally consume them for no reason. Making such reservations is not an uncommon practice, either. Many games allocate themselves a large chunk of memory and then divide it up as they see fit later. And I'm sure they don't use 100% of it all the time. The same goes for storing graphics in video memory... and what about the idea of taking Exclusive mode under DirectX? Is that wrong too?

quote:

In other words, logic is the method of taking several subjective viewpoints and discovering the objective. If people couldn't do this, they would never be able to communicate because all things come through a viewpoint before your brain can process them. If you ever wish to evaluate anything in an objective way, you must use logic.


I am a hearty believer in logic, trust me. But I contend that we don't always need to evaluate everything in an objective way, nor would we want to.

quote:

I would ask something of you before you reply, though: find me a programming task for which there is no C++ language feature designed. If you have trouble coming up with one, then my point is sound.

Oh, you're just trying to prevent me from replying. No such luck.

I think this is too vague to answer. I believe there are several tasks in other languages that C++ has -trouble- with, but since C++ is very close to the machine code if you want it to be, you can pretty much do anything that is possible on a computer with C++. But it doesn''t mean there is a specific design for everything.

Sadly, I am not an 'expert' in any other language. I am not really an expert in C++ either (as you would no doubt confirm). But I expect someone familiar with very different languages could point out a procedure or algorithm which is very awkward in C++ and straightforward in a different language.

quote:

If I may, I'd like to ask another question: if there is no absolute for when and how to use exceptions in a given problem, how can they work? Exceptions exist to provide (and enforce) proper communication between programmers' code. There would be no purpose for exceptions if people couldn't communicate. There is no communication without a standard.


And if I am communicating with myself, I shall use whatever language suits me best. Do you always write fully-formed sentences when you leave yourself notes? And what about your shopping list? If you do, then more power to you, but you are the exception in taking such an unnecessary step. It's when we deal with others that we need to worry about standards.

(However, I will also take this opportunity to emphasise that communication, or more specifically language, is not just about relaying information: it also fulfills psychological and social needs. Maybe you will reject these needs as being illogical. But they are there nonetheless.)

quote:
And another question yet. If you were to say, "I will use the language features as they suit my logic," then you will end up re-writing them to suit your logic (look at the type-checking example above -- bet you had no idea the "built-in" type-checking was that powerful). You must either re-write your logic or the language's logic, but they cannot function together as-is. Which would you adjust?

I fully appreciate the power of type-checking, thank you very much. As I pointed out above, I never intended my use of strings to replace it. Just that, in certain circumstances, a full hierarchy of exception types, and several different catch handlers to deal with them when they are essentially the same thing with different messages, is overkill and clutters the code to no advantage. My example was for a specific sort of situation: a certain procedure could fail in a number of different ways, but the only difference between the ways these events needed to be handled is in the reporting to the user/programmer. Therefore a string-based exception suits the purpose here. It's the principle of Occam's Razor: the simplest solution is nearly always the right one.
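The string-throwing style being defended here looks something like this. It is a sketch only: load_level and its failure messages are invented for illustration.

```cpp
#include <string>

// A procedure that can fail in several ways, where the only difference
// between the failures is the message reported to the user/programmer.
std::string load_level(int id)
{
    try {
        if (id < 0)  throw std::string("negative level id");
        if (id > 99) throw std::string("no such level");
        return "ok";
    }
    catch (const std::string& why) {
        return "load failed: " + why;   // handled locally, as described
    }
}
```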

quote:
Original post by Kylotan

I fully appreciate the power of type-checking, thank you very much. As I pointed out above, I never intended my use of strings to replace it. Just that, in certain circumstances, a full hierarchy of exception types, and several different catch handlers to deal with them when they are essentially the same thing with different messages, is overkill and clutters the code to no advantage. My example was for a specific sort of situation: a certain procedure could fail in a number of different ways, but the only difference between the ways these events needed to be handled is in the reporting to the user/programmer. Therefore a string-based exception suits the purpose here. It's the principle of Occam's Razor: the simplest solution is nearly always the right one.



First, you did use strings to replace type-checking, whether you meant it or not (and BTW no one ever really means to replace a language feature, unless they're going slightly screwy).

I understand that simpler code is usually better (as it fits my own definition, previously mentioned, of good code quite nicely), but in the method you just listed, you are actually doing more work than in the method I provided, which is this:


#include <iostream>
#include <typeinfo>

class exception
{
public:
    virtual ~exception() {}

    // use something other than cout if you wish
    virtual void report() const { std::cout << typeid(*this).name(); }
};

// any derived classes can be empty and report themselves
class out_of_memory : public exception {};



When you write a class, and you want to add in exception-handling, you just list the exceptions in the header (or add them as you go), and then throw them:


class MyApp
{
public:
    // more empty classes
    class exception : public ::exception {};
    class invalid_mru_position : public exception {};
};

// somewhere... (note: you throw an object, not the type name)
throw MyApp::invalid_mru_position();



Give the derived classes names that tell the type of the exception (up to 256 characters?). So, MyApp::invalid_mru_position outputs "MyApp::invalid_mru_position" (though the exact spelling of the name is up to the compiler). Even if you catch it as an ::exception, it still outputs its true name. RTTI provides an exceptionally easy way of doing this. There's nothing stopping you from adding in a variable for a description, too.

How is that not simple? In fact, it is much simpler to keep track of, even if you are both throwing and catching the exceptions yourself. Even the exception-throwing code is easier to write. And the syntax-checking of the compiler keeps you from making errors that can be frustrating...and on...and on...
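Put together and run, the two fragments above behave like this. This is a compilable sketch using the class names from the example; note that what name() actually prints is implementation-defined (it may be mangled on some compilers).

```cpp
#include <iostream>
#include <typeinfo>

class exception
{
public:
    virtual ~exception() {}
    virtual void report() const { std::cout << typeid(*this).name() << '\n'; }
};

class MyApp
{
public:
    class exception : public ::exception {};
    class invalid_mru_position : public exception {};
};

// Throw the derived type, catch it as the root of the hierarchy:
// RTTI and virtual dispatch still see the derived class.
bool provoke()
{
    try {
        throw MyApp::invalid_mru_position();
    }
    catch (const ::exception& e) {
        e.report();   // prints the derived class's name, not "exception"
        return typeid(e) == typeid(MyApp::invalid_mru_position);
    }
    return false;
}
```

Catching by reference matters here: catching ::exception by value would slice the object and lose the derived type.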

What I was trying to point out is that programmers don't use the language very well. Everyone can improve. I agree with the principle you stated (that's why I have been saying that better code is smaller, faster, cleaner, etc.), but you must be careful about what you call simple. My dictionary defines simple as: "easy to do, understand, use, solve." Sometimes the most obvious solution is not the most simple solution. You cannot call something complex just because it is not obvious.

(Any time you have to use the word "type" to talk about a variable, or name the variable, you should use type-checking instead.)

Also, if the program wishes to handle the exceptions, just catch an ::exception reference and be done with it! It is in no way overkill unless you want it to be. You can most certainly throw a MyApp::invalid_mru_position and catch it as the base class, MyApp::exception, or the base class of that, ::exception. You choose exactly how many catch handlers you need depending on whether you wish to handle the exceptions similarly, differently, or not at all.

You don't need a full hierarchy of class types. The number of exceptions grows with the number and size of the classes you are using. Obviously, if you are doing a small project, then you will only need one or two exceptions. My method is uniform and scales correctly to any size project.

You don't need any more catch statements than you wish to handle. (Note also that they are not catch handlers but catch statements; that means that more than one catch statement may be necessary to handle different parts of an exception. Stop being subjective with the language and start using the definitions provided with it.) If you wish to do something special based on the type, you can, but you certainly don't have to. In any situation you can mention, my method requires less code, is simpler, less work, less chance for error, and can do EVERYTHING that yours could do. The only thing you accomplished was a work-around for type-checking. You didn't eliminate any code. You didn't make it any easier to read. Further, I listed all those things in my example and you didn't disagree. But here you do. Why? PROVE IT. I want examples where it would be harder to write, harder to read (objectively), more error-prone, etc. Or come up with an easier method.


quote:
Original post by Kylotan

The main criteria I draw upon here is readability. Readability, by nature is subjective: it involves a totally analogue human being reading something If you want to argue that readability should be some sort of standard, then do so. However, those 2 given programmers writing code for themselves will probably have 2 different ideas of what is readable to them.



Yes, they can have two different ideas. However, one or both of them is incorrect. Why? The answer lies in your misconception.

Readability is, by nature, subjective? No, it has two different definitions: the objective and the subjective. The definition used depends solely on the purpose of the antecedent. If that purpose is solely proper communication, then the objective definition applies: "that can be read; legible". If that purpose is emotional or psychological benefit, then the subjective definition applies: "easy or pleasant to read; interesting: Treasure Island is a very readable story." A term can have only one definition in a given context. As the primary purpose of programming is to complete a program, and not to be emotionally and psychologically stimulating, I refer to the objective definition to find what readability of code should be. You may add something to your programming over and above that (i.e., you like a particular notation), but remember that it is your own choice and has no bearing on whether the code is smaller, or faster, or whatever, and it can easily impair the objective readability of the code because, as you pointed out, others are not likely to share your subjective tastes.

(The purpose of a story, or poem, on the other hand is primarily for entertainment and enjoyment, or some other effect upon the emotional and psychological side of the reader or listener, and so you would use the subjective definition. In fact, you could say that a story only uses the objective definition to achieve its subjective purpose.)

I would like to point out that the language is already given. You are not given the chance to re-define it. You were not asked how it should be when it was made. It is there, and if you wish to use it, if you wish to communicate with the computer, you must use the definitions given. It is similar with English, when used objectively: if I define banana to mean strawberry, and you do not, how can we communicate? In a computer language, you are not communicating with a human being, but with an unemotional computer. Emotions are worthless in a computer language. There aren't any words in a computer language to convey a person's state of mind as he is coding a particular line. It is only when you wish to communicate with something other than computers that your emotions make the slightest difference. However, if you wish to keep the same code form when talking to the computer as to another person in a computer language, you had best stick to the definitions of the language and not your own particular "style." However, comments may be used freely but are at their best when other people can understand them; that is, when they are consistent and expected. Note that comments are not usually used for stories or other emotional things; instead, they are usually used to provide a short description of what the code does.


quote:
Original post by Kylotan

I think this is too vague to answer. I believe there are several tasks in other languages that C++ has -trouble- with, but since C++ is very close to the machine code if you want it to be, you can pretty much do anything that is possible on a computer with C++. But it doesn't mean there is a specific design for everything.



Of course there isn't a specific design for everything! If there were, C++ wouldn't be flexible! The C++ language is just a set of tools -- and tools, when used according to the way they were designed, will achieve maximum efficiency. That is precisely why I asked whether there is a tool missing in the language.

There is only one thing I can think of: RTTI should be extended so that it allows creation of objects from a stored type_info object. Or, perhaps just one of the strings from the type_info object.

How did I run into that problem? My library has its own "GUI", that is, common window classes with different implementations on each OS. So, on Windows your windows and buttons and menus look like the Windows GUI, and on the Mac they look like the Mac's GUI, etc. Anyway, each window contains a Window* to its parent and a linked list of Window* to its children, and the list can write itself to file correctly using the virtual functions of the Window and Window-derived classes. So, each Window writes its children to disk, and they write their children to disk, etc. But how can it read in the proper classes, even if the user derived some new classes? It would only re-create Window objects, not the derived objects. So, because of this problem, it is not possible to load a GUI from disk, except by creating your own "version" of RTTI for the task. But that shouldn't be. That is why I think the language needs to be extended in that direction. RTTI is certainly not the solution to every problem, but in that particular problem it would be a life-saver.
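The hand-rolled "version" of RTTI that this work-around requires is usually a factory: map the class name stored on disk back to a creation function. A sketch with invented Window/Button classes (C++ still offers no built-in create-from-type_info):

```cpp
#include <map>
#include <string>

class Window
{
public:
    virtual ~Window() {}
    virtual std::string type_name() const { return "Window"; }
};

class Button : public Window
{
public:
    std::string type_name() const { return "Button"; }
};

// The registry: the name written to the file keys a constructor call.
typedef Window* (*Creator)();
std::map<std::string, Creator>& registry()
{
    static std::map<std::string, Creator> r;
    return r;
}

template <class T> Window* create() { return new T; }

// What the loader does for each record it reads back in; returns null
// for a name that was never registered (e.g. a class the user forgot).
Window* create_from_name(const std::string& name)
{
    std::map<std::string, Creator>::const_iterator it = registry().find(name);
    return it == registry().end() ? 0 : it->second();
}
```

The weakness is visible in the sketch: every derived class, including user-derived ones, must remember to register itself, which is exactly the bookkeeping a language-level facility would remove.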


quote:
Original post by Kylotan

The same goes for storing graphics in video memory... and what about the idea of taking Exclusive mode under DirectX? Is that wrong too?



Yes, there should be a better method, or at least allow both modes. I think the video memory architecture is tied down by old standards, and should not require exclusive access to function at a reasonable speed. What is a multi-tasking OS good for if you are not going to allow other programs to function while your program is running?


quote:
Original post by Kylotan

I am a hearty believer in logic, trust me. But I contend that we don't always need to evaluate everything in an objective way, nor would we want to.



No, but programming is objective.


quote:
Original post by Kylotan

And if I am communicating with myself, I shall use whatever language suits me best. Do you always write fully-formed sentences when you leave yourself notes? And what about your shopping list? If you do, then more power to you, but you are the exception in taking such an unnecessary step. It's when we deal with others that we need to worry about standards.

(However, I will also take this opportunity to emphasise that communication, or more specifically language, is not just about relaying information: it also fulfills psychological and social needs. Maybe you will reject these needs as being illogical. But they are there nonetheless.)



Well, first of all, you are making for yourself two styles of coding, which will inevitably introduce complications to your coding. Simpler is better, right?

(I don't write a shopping list, because I go to the store for quick items only (milk, bread, etc.). I always use fully formed sentences when writing myself notes, partly because I am absent-minded, and partly because writing improves with use.)

Second, you are exactly right in saying that dealing with others requires standards. Also, when you talk to yourself (or write notes or otherwise communicate), you may use whatever language you deem fit. But have you forgotten that you are dealing with a computer? Your coding is not writing yourself notes on how to call functions and what-not...

Third, programming is primarily a means of conveying information to the computer about how it should run. It is not about writing poetry. It would be illogical to say that good code is dependent on the user's subjective opinion, whether it is readable, clean, fast, clunky, whatever. Only facts matter with coding.



Since I'm finding that I have to explain, in every reply, the method of exception handling that I would like to use in my library, I am going to show exactly how it works.


Step 1 - The user starts coding his app, adds in a lot of code, and it compiles fine. (It does have some hard-to-recognize bugs, and some classes are simply not being used correctly because he claims he doesn't have time to read the manual.)

Step 2 - The user attempts to run his program, and the bugs affect program flow, in a way that wasn't intended. Also, the classes that aren't being used properly throw some exceptions, etc.

Step 3 - The user has placed no catch() statements of any kind. However, the main() hidden by my library has a catch(exception&) statement that catches that first exception, and stack unwinding cleans up along the way. Then RTTI and the virtual report() function are used to display the derived type of the exception and its description.

Step 4 - Armed with this knowledge, the programmer looks in the docs, under the class name, reads a good description of the problem, and easily fixes his code. He can add in catch() statements wherever he likes, and catch any exceptions that he wants. This allows him to check the value of any data member by outputting it in the catch handler. (This may be especially useful for people who don't have debuggers.)

Step 5 - Rinse and repeat.

Step 6 - The user is now assured of a perfectly working program.


This method of exception-handling just forces the user to read the docs and use the classes correctly, or it simply won't run.
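The hidden main() of step 3 amounts to roughly this. It is a sketch only: user_main, invalid_usage, and the report() interface are assumptions carried over from the earlier examples, not the library's actual code.

```cpp
#include <iostream>
#include <typeinfo>

class exception
{
public:
    virtual ~exception() {}
    virtual void report() const { std::cout << typeid(*this).name() << '\n'; }
};

// A sample of the user's code: a class is misused and a derived
// exception escapes without any catch() statement of the user's own.
class invalid_usage : public exception {};
int user_main() { throw invalid_usage(); }

// The library's real entry point: destructors have already run by the
// time the exception lands here, and RTTI reports the derived type.
int guarded_main()
{
    try {
        return user_main();
    }
    catch (const exception& e) {
        e.report();
        return 1;
    }
}
```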




- null_pointer
Sabre Multimedia

Edited by - null_pointer on 5/4/00 5:53:50 PM
