#define gives too much control?

Don't you think that #define gives the programmer too much control of the language? I mean, it CAN prove useful in some situations, but someone might misuse it and end up with:

function(int, main) takes var(int, argc) and var(char pointer pointer, argv) nothing_else
	if (1 < 2)
		cout << "1 < 2" << endl;
		cout << "Test Successful" << endl;
	end
	else
		cout << "1 >= 2" << endl;
		cout << "Test Unsuccessful" << endl;
	end
end
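
(For reference, one hypothetical set of defines that gets most of the way to that syntax. As pointed out later in the thread, the if/else bodies still lack an opening brace, so these alone would not make the snippet compile:)

#define function(type, name) type name   /* function(int, main) -> int main */
#define takes (
#define var(type, name) type name
#define and ,          /* note: 'and' is an alternative token in C++, so treat this sketch as C */
#define pointer *
#define nothing_else ) {
#define end }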

Just thinking about this... What are your thoughts?
I think this belongs in General Programming, not the Lounge.
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Compared to other languages (e.g. LISP), C preprocessor macros are pathetically limited (BOOST_PP pretty much hits the limit of what you can do with them). They do not really give you any "control" over the language at all: the end result is always isomorphic to C, at best using different "keywords".
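
(For a concrete sense of where that limit sits, here is a small sketch, with made-up struct and macro names, of about the fanciest thing BOOST_PP does well, namely bounded repetition:)

// Bounded repetition -- close to the ceiling of what the C preprocessor offers.
#include <boost/preprocessor/repetition/repeat.hpp>

// BOOST_PP_REPEAT invokes the given macro once per index n = 0..count-1.
#define DECLARE_FIELD(z, n, prefix) int prefix##n;

struct Point3
{
    BOOST_PP_REPEAT(3, DECLARE_FIELD, coord)  // int coord0; int coord1; int coord2;
};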
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." — Brian W. Kernighan
Quote:Original post by Fruny
Compared to other languages (e.g. LISP), C preprocessor macros are pathetically limited (BOOST_PP pretty much hits the limit of what you can do with them). They do not really give you any "control" over the language at all: the end result is always isomorphic to C, at best using different "keywords".


Interesting. Well, I know you're not changing the language itself; it's just that with macros you can write who-knows-what and still have it compile the Right Way.

Sorry about the forum, I messed up.
Quote:Original post by agi_shi
Don't you think that #define gives the programmer too much control of the language? I mean, it CAN prove useful in some situations, but someone might misuse it and end up with:

C and C++ have always been based on the principle that the programmer knows what they're doing. Whether this is a good principle or not depends on who you ask...
I think functions, variables, function parameters, return values, the logical-and operator, arrays, indices, switch statements, etc. give too much control. The programmer might do this (typed in at GDNet, might have minor errors):
#include <limits>

unsigned int add(unsigned int x, unsigned int y)
{
    unsigned int look_up[10 * 10];
    for (unsigned int index = 0; index < 10 * 10; ++index)
    {
        look_up[index] = index % 10 + index / 10;
    }

    unsigned int index = std::numeric_limits<unsigned int>::max();
    bool x_in_range = true, y_in_range = true;

    switch (x)
    {
        case 0: index = 0; break;
        case 1: index = 1; break;
        case 2: index = 2; break;
        case 3: index = 3; break;
        case 4: index = 4; break;
        case 5: index = 5; break;
        case 6: index = 6; break;
        case 7: index = 7; break;
        case 8: index = 8; break;
        case 9: index = 9; break;
        default: x_in_range = false; break;
    }

    switch (y)
    {
        case 0: index += 0;  break;
        case 1: index += 10; break;
        case 2: index += 20; break;
        case 3: index += 30; break;
        case 4: index += 40; break;
        case 5: index += 50; break;
        case 6: index += 60; break;
        case 7: index += 70; break;
        case 8: index += 80; break;
        case 9: index += 90; break;
        default: y_in_range = false; break;
    }

    if (x_in_range && y_in_range)
    {
        return look_up[index];
    }
    return x + y;
}


Only a bad programmer would do this, of course, but the programmer can. That doesn't make functions, variables, function parameters, return values, the logical-and operator, arrays, indices, switch statements, etc. bad features. The programmer is responsible; the language won't write good code for the programmer.
Quote:Original post by agi_shi
Don't you think that #define gives the programmer too much control of the language? I mean, it CAN prove useful in some situations, but someone might misuse it and end up with:
*** Source Snippet Removed ***

Just thinking about this...

What are your thoughts?


I think they don't give enough control.

They give plenty of power, but the C preprocessor provides very little control to the programmer.

You can't even push or pop a preprocessor frame -- and the C preprocessor isn't even Turing-complete.

Note that C++ templates add a lot of preprocessor-like functionality that can be useful, though templates have issues of their own.
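
(A classic sketch of what templates can do that #define cannot -- recursion with a terminating specialization:)

// Compile-time factorial via template recursion; the preprocessor
// cannot recurse, so it has no equivalent.
template <unsigned N>
struct Factorial
{
    static const unsigned value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0>  // specialization terminates the recursion
{
    static const unsigned value = 1;
};

// Factorial<5>::value == 120, computed entirely at compile time.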

Even the examples you showed don't really work -- well, they work in the narrowest of senses (you can make a C preprocessor make that code compile), but they don't work in the sense that the user shouldn't need to understand the steps the preprocessor walks through.

What am I talking about? Well...

Imagine if you tossed in
1> Truly multi-line #define commands.
2> The ability to push, pop, copy, save and load pre-processor token substitution rulesets
3> The ability to do non-prefix operations

A C-like preprocessing language that actually gave the programmer some control would look like:
#rulespace decl_rulespace
#define pointer *
#endrulespace

#rulespace function_body_rulespace
#endrulespace

#rulespace function_rulespace
#define takes (
#build var(type, value)
#push rulespace decl_rulespace
#output type value
#pop rulespace decl_rulespace
#endbuild
#define and ,
#build nothing_else
#output ) {
#pop rulespace function_rulespace
#push rulespace function_body_rulespace
#endbuild
#endrulespace

#build function(A, B)
#output A B
#push rulespace decl_rulespace
#endbuild


Now the programmer has the ability to locally change the language, as opposed to the current method, which gives the programmer the power to change the language but very little control over the scope of the change.
Actually, compared to many languages, #define gives very, very little power *over the language* beyond the superficial. Off the top of my head I can name Squeak, OCaml's camlp4 preprocessor-pretty-printer, and Lisp macros, which all give leagues more power and fathoms more functionality.

EDIT - P.S. One reason syntax extensions are useful is that they let you easily create a domain-specific language without implementing some minimally functional interpreter, while keeping all the goodies of the mother language, like type inference and static typing.

[Edited by - Daerax on May 21, 2006 5:28:41 PM]
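
(To make the contrast concrete: within the C preprocessor's limits, a macro-built mini-DSL tops out at something like this sketch, with hypothetical TEST/CHECK macros; camlp4 or Lisp macros can restructure syntax far beyond token pasting like this:)

#include <iostream>

// A two-macro testing "DSL" -- about as far as #define comfortably goes.
#define TEST(name) void test_##name()
#define CHECK(expr) \
    std::cout << ((expr) ? "ok:     " : "FAILED: ") << #expr << "\n"

TEST(arithmetic)
{
    CHECK(1 + 1 == 2);
    CHECK(2 * 3 == 6);
}

int main()
{
    test_arithmetic();
    return 0;
}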
Quote:Original post by agi_shi
Don't you think that #define gives the programmer too much control of the language?


No. Any useful language is complex enough that it lets you do something wrong; trying to prevent that is an impossible task. Time spent trying to make a language idiot-proof is time wasted against the idiots, and anyone "rewriting" the basic way they code for no good reason is definitely an idiot, in my opinion.

(Safety features, those are okay: ways for the programmer to let himself know he's made a mistake. From access-level specifiers to asserts to unit tests, these are all useful tools for the task. However, they are all circumventable, and no effort should be spent trying to force the programmer to use them within the language. You can lead a horse to water, but you can't force it to drink.)
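
(For instance, a minimal sketch of such a circumventable safety check; the function here is made up:)

#include <cassert>

// assert() documents an assumption and aborts a debug build when it is
// violated -- but it is easily circumvented: compiling with NDEBUG removes it.
unsigned divide(unsigned numerator, unsigned denominator)
{
    assert(denominator != 0 && "caller must pass a nonzero denominator");
    return numerator / denominator;
}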
And why not do something like this:

foo.py
types = ["char", "int", "long", "float", "double"]

func = """void foo(%s a, %s b)
{
   // do stuff with a and b
}"""

for t1 in types:
    for t2 in types:
        print(func % (t1, t2))


then python foo.py | g++ -x c++ -c - -o foo.o ?

The only real advantage the C preprocessor has is that it is automatically called by both C and C++ compilers on their source. There is no reason why you couldn't generate the "final" source code that the compiler sees with another tool.

NumPy (a Python extension) does use Python scripts to generate C code, which is then compiled into a library. I am sure many others do similar things.
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." — Brian W. Kernighan
