# Separating Declaration and Definition?


## Recommended Posts

I was wondering what everyone's opinion is, and also what the original purpose was, of a programming language making you separate your data from your code? I know Ada does, I believe Pascal does, and if I remember correctly it was also common style in C to put all your variables and such at the top of your function. Now I always see logic code mixed with variable declarations and such. Why don't people follow that style of separating the two any more? Why did people originally separate them?

##### Share on other sites
Basically, because compilers written that way were smaller and less complicated. This was a big deal back when memory was measured in kilobytes rather than gigabytes.

##### Share on other sites
Quote:
 Original post by Falling Sky
Why don't people follow that style of separating the two any more?

Because the new style is both more readable and sometimes faster, for example:
```cpp
Foo x;
x = Foo(1, 2, 3);
// ...
Foo y(4, 5, 6);
```

In this code, x is default initialized and then assigned to, which is less efficient than y which is initialized directly with the values.
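To make the two-steps-versus-one difference observable, here is a minimal sketch using a hypothetical instrumented Foo (the ops counter is mine, not from the post) that counts construction and assignment steps:

```cpp
#include <cassert>

// Hypothetical instrumented type: "ops" counts how many constructor or
// assignment steps the object has gone through.
struct Foo {
    int a = 0, b = 0, c = 0;
    int ops = 0;
    Foo() : ops(1) {}                                          // default construction: step 1
    Foo(int a_, int b_, int c_) : a(a_), b(b_), c(c_), ops(1) {}
    Foo& operator=(const Foo& rhs) {
        a = rhs.a; b = rhs.b; c = rhs.c;
        ++ops;                                                 // assignment: one more step
        return *this;
    }
};

int steps_default_then_assign() {
    Foo x;              // default initialized...
    x = Foo(1, 2, 3);   // ...then assigned: two steps in total
    return x.ops;
}

int steps_direct_init() {
    Foo y(4, 5, 6);     // initialized directly with the values: one step
    return y.ops;
}
```

For a type with a genuinely expensive constructor or assignment operator, that extra step is real work.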

##### Share on other sites
Quote:
 Original post by Falling Sky
Now I always see logic code mixed with variable declarations and such. Why don't people follow that style of separating the two any more? Why did people originally separate them?
When I first started using C, that was the way it had to be done. You declared your variables at the top of the function, and then wrote the logic for them after that.

When C++ allowed you to mix them, C compilers added the functionality. For standard C, this came in C99.

I usually declare my big/major/driving variables at the top of a function, and I only declare minor or temp variables as needed, like something for a loop, which I will declare right in the for statement, unless I have a reason not to.
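As an illustrative sketch of that mixed style (the function is made up, not from the post): the running total is a "driving" variable declared up front, while the loop counter exists only inside the for statement:

```cpp
#include <cstddef>
#include <vector>

int sum_of_squares(const std::vector<int>& vals) {
    int total = 0;  // major/driving variable, declared at the top
    // minor temp declared right in the for statement, scoped to the loop
    for (std::size_t i = 0; i < vals.size(); ++i) {
        total += vals[i] * vals[i];
    }
    return total;
}
```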

##### Share on other sites
When I'm writing something from scratch I will put all variables at the top and initialise on declaration, until that algorithm is written. After I've tested it I may move the variables to the scope where they are used, or I may not, depending on which I think will be more readable if I come back in a month. But I almost always initialise on declaration, i.e. int x = 0;

##### Share on other sites
It's because if you have all your variable declarations at the top of a scope, you can compile in one pass. That was really good for the computers that C initially ran on (i.e., slow, with little memory). Nowadays, it's not such a big problem.

##### Share on other sites
BTW this has nothing to do with separating declaration and definition (that's the header file stuff). This is separating local variable declarations (like int i;) and statements (like i = 5;).

##### Share on other sites
In class-driven languages like C++/C#, the cost of initializing an object of a class can be expensive if the class describes a very complex object. Delaying such definitions until they are needed may improve the flow of your program: if an error occurs early in a function and it has to return immediately, all the code that set up and initialized those objects at the start of the function was a complete and utter waste of time.

Of course, this isn't too much of an issue for simple data types, but I have switched from defining all my objects and variables at the start of my functions, to defining them only when they are needed.
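A sketch of that pattern, with a hypothetical ReportBuilder standing in for a "very complex object" (the static constructions counter is my instrumentation, added so the saving is observable):

```cpp
// Hypothetical expensive type standing in for a "very complex object".
struct ReportBuilder {
    static int constructions;               // instrumentation, not in the post
    ReportBuilder() { ++constructions; }    // imagine buffers, handles, etc.
};
int ReportBuilder::constructions = 0;

bool make_report(bool input_valid) {
    // Early-out first: if we bail here, the expensive object is never built.
    if (!input_valid)
        return false;
    ReportBuilder builder;                  // constructed only on the happy path
    (void)builder;
    return true;
}
```

Rejected input never pays the construction cost, which it would if the builder were declared at the top of the function.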

If I want a function to remain particularly tight, I may even use function arguments, passed by value and which have served their purpose, to hold new values in the function, thus totally bypassing the need to define yet more local objects.

##### Share on other sites
Quote:
 Original post by CodeStorm
If I want a function to remain particularly tight, I may even use function arguments, passed by value and which have served their purpose, to hold new values in the function, thus totally bypassing the need to define yet more local objects.

As a compiler designer, I have three words for you: Stop. Torturing. Optimizers.

Your average optimizer can and will notice that you don't use a value anymore. It's textbook program analysis and has been for two decades now. If you reuse a variable for another purpose, you defeat the optimizer, forcing the compiler to keep around the original variable until you assign to it again because otherwise something might break.

##### Share on other sites
Note that Pascal and C were designed around 1978 with strong influence from Algol.

The basic idea behind declaring everything up front was that the compiler could lay out memory without knowing the algorithm itself, and then use those fixed locations within the function.

C still has the register keyword, which served two purposes. It told the compiler to allocate that particular variable in a register, and that all accesses to that variable should be done exactly as specified in the code, since each access could have side effects. As such, it also performed some of the function of volatile (see the memory-mapped output register in Duff's device).

In Pascal, there's a distinction between a function and a procedure. The reason for this is the return value. In Pascal, the result of a function is a variable which exists throughout the entire function. This avoided parsing the function body to determine what and how it returns, if at all. While not strictly necessary, given C's automatic int substitution, perhaps the intention was to define structure explicitly.

Nowadays, none of this is necessary. The current recommendation is to define variables at the point where they are needed. While it likely has little to no impact on the compiler's optimization capabilities, it makes code clearer.

Those who believe that re-using variables is smart have likely never worked on Pascal or Delphi projects, or were lucky enough not to notice it. There's nothing more horrible than having i be a for-loop index, an accumulator elsewhere, and the integer representation of a pointer storing a callback, all in the same function. It's horrible, just don't do it.
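An illustrative (made-up) contrast: the same sum computed with one reused name wearing several hats, versus one name per meaning. Both return the same answer, but only one will still be readable in a month:

```cpp
// The anti-pattern: "i" is first a loop index, then repurposed as an accumulator.
int sum_with_reuse(const int* vals, int n) {
    int i;
    for (i = 0; i < n; ++i) { /* here i is a loop index; it ends up equal to n */ }
    i = 0;                        // now i is an accumulator instead
    for (int k = 0; k < n; ++k)
        i += vals[k];
    return i;
}

// One name per meaning: nothing to untangle later.
int sum_clean(const int* vals, int n) {
    int total = 0;
    for (int index = 0; index < n; ++index)
        total += vals[index];
    return total;
}
```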

Quote:
 In class driven languages like C++/C#, the cost of initializing an object of a class can be expensive, if the class describes very complex objects

For C++ in particular, you don't pay for what you don't use. So feel free to declare as many stack allocated variables as you wish.
Even more, C++ is free to not allocate structure it doesn't need. Compiler is perfectly capable of analyzing what causes side-effects and what doesn't. As such, the cost of unused objects is zero and trivial objects may be only partially constructed. Example:
```cpp
struct Message { int a, b, c, d; };

void foo(const Message& m) {
    print(m.c);
}

// ...
Message msg;
msg.a = 10;
msg.b = 20;
msg.c = 30;
msg.d = 40;
foo(msg);

// Compiles into:
// print(30);  // the compiler is clever enough to know there are no side effects
```

In Java, unused variables are not compiled into the class file. In addition, variables that are never read are reported as a warning or error. Objects need to be initialized with new, so there is no accidental construction unless the user performs it. And if they do, the compiler will report it (variable is assigned but never read). So that is a non-issue.

I never looked into CLR details, but aside from stack allocated types, I would presume it's similar to JVM.

Long story short - don't re-use auto-allocated or stack-based variables. They really are a non-issue for any reasonable compiler.

##### Share on other sites
Quote:
 Original post by Antheus
Note that Pascal and C were designed around 1978 with strong influence from Algol.

Note that C was in production use by 1973, the year the Unix kernel was rewritten in it.

Also, note that FORTRAN, in first production use in 1957, allowed variables to be declared at first use anywhere in the body of code, just like with C++ (FORTRAN did not, in fact, allow you to declare variables separately from first use until, what, Fortran 77?). The enforced placement at the top of a function definition was never due to the restrictions of running on slow processors with little memory. It was purely due to theoretical reasons about how to write programs better.

As Antheus said, it was the heritage of Algol. Algol was designed by a committee of academics who knew far more about how to write good software than the folks on the ground who actually wrote and maintained that stuff. Those folks mostly used COBOL, wherein you declared all your variables separately in the Data Division and used them later in the Procedure Division. We all know Algol caught on like wildfire and replaced COBOL.

##### Share on other sites
Quote:
Quote:
 In class driven languages like C++/C#, the cost of initializing an object of a class can be expensive, if the class describes very complex objects

For C++ in particular, you don't pay for what you don't use. So feel free to declare as many stack allocated variables as you wish.
Even more, C++ is free to not allocate structure it doesn't need. Compiler is perfectly capable of analyzing what causes side-effects and what doesn't. As such, the cost of unused objects is zero and trivial objects may be only partially constructed. Example:
```cpp
struct Message { int a, b, c, d; };

void foo(const Message& m) {
    print(m.c);
}

// ...
Message msg;
msg.a = 10;
msg.b = 20;
msg.c = 30;
msg.d = 40;
foo(msg);

// Compiles into:
// print(30);  // the compiler is clever enough to know there are no side effects
```
I think he means code like this, not unused variables:
```cpp
class big_class { /* ... */ };

void function(int u) {
    big_class a, b, c;
    a = c;
    if (u) return;
    c = b;
    if (u == 42) return;
    b = a;
}
```
My compiler does construct the objects where they are declared.

##### Share on other sites
Quote:
 Original post by Bregma
Also, note that FORTRAN, in first production use in 1957, allowed variables to be declared at first use anywhere in the body of code, just like with C++ (FORTRAN did not, in fact, allow you to declare variables separately from first use until, what, Fortran 77?).

Unfortunately, that's not an apples-to-apples comparison. Fortran didn't even have stack allocation in its original incarnations; every variable was effectively a global. Fortran variables also encoded type in the first letter of the identifier and restricted identifier length to, IIRC, 4 characters originally, then up to 6 characters in Fortran 66, and didn't even have lower case (every character was uppercase, leading to great confusion between O and 0 in Fortran programs). When you get to Fortran 77, you finally got to have variables that didn't obey the first-character type rule; however, in order to declare a variable that didn't, you were required to declare it separately from first use.

##### Share on other sites
You can reuse variables or not, because either way the compiler likely doesn't care. For a compiler that uses Static Single Assignment, each definition (write) of a variable will be seen as a new variable anyway, which simplifies calculating live ranges. You're not harming or helping the compiler, although you may be confusing yourself in the long run.
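A sketch of what SSA renaming means in practice (these functions are mine, not from the post): reusing x, and spelling out one fresh name per assignment, describe identical dataflow, so an SSA-based compiler sees them as the same program:

```cpp
// Reusing a single variable...
int with_reuse(int n) {
    int x = n + 1;   // SSA sees: x1 = n + 1
    x = x * 2;       // SSA sees: x2 = x1 * 2
    x = x - 3;       // SSA sees: x3 = x2 - 3
    return x;        // return x3
}

// ...versus writing the SSA form by hand: one name per definition.
int with_fresh_names(int n) {
    int x1 = n + 1;
    int x2 = x1 * 2;
    int x3 = x2 - 3;
    return x3;
}
```

Each write to x becomes a new SSA value internally, which is why the reuse neither helps nor hurts the generated code.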

##### Share on other sites
Quote:
 Original post by Falling Sky
Why don't people follow that style of separating the two any more?

Because they don't have to.

Quote:
 Why did people originally separate them?