How do I do this without using goto statements?

Recommended Posts

Quote:
Original post by Daaark
Just leave it. It's fine like that. goto is only bad if abused. It comes in handy in certain situations; that's why it's still included in modern languages.


Not in any languages that don't try to look like C, as far as I know. And even then, Java doesn't have it. Some modern languages have it, but most don't.

Anyway, comparing a boolean with true (or false) is redundant, a bit like saying "if the sky is blue is true". In C++ it's actually dangerous: in an integer comparison, true converts to 1 and false to 0, but "truthiness" is defined as "anything that isn't 0", i.e.:

if (7) cout << "seven is true" << endl;
else   cout << "seven is false" << endl;

if (7 == true) cout << "seven equals true" << endl;
else           cout << "seven does not equal true" << endl;

prints

seven is true
seven does not equal true
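The mechanism, for the record: in 7 == true, the bool is promoted to int before the comparison, so the test is really 7 == 1. A minimal sketch of the difference:

#include <iostream>
using namespace std;

int main()
{
    int n = 7;

    // true promotes to int 1 here, so this really tests n == 1 and prints 0
    cout << (n == true) << endl;

    // converting n to bool first applies the "anything non-zero" rule; prints 1
    cout << (static_cast<bool>(n) == true) << endl;

    // which is why the idiomatic test is simply:
    if (n) cout << "n is truthy" << endl;
}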

Quote:
Quote:

That's what C/C++ function call overhead is all about...

What overhead? Most modern compilers are fairly good at inlining. If function call overhead scares you, I assume you don't use the standard library at all either? Most of that relies pretty heavily on the compiler being able to inline your code. std::vector would be a lousy data structure if you couldn't count on your compiler inlining access to it.
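For instance, the usual goto-free way out of a nested loop is to wrap it in a small function and return early, and a helper that small is exactly what compilers inline. A minimal sketch (the container and names are just illustrative):

#include <cstddef>
#include <vector>

// Returns true if target occurs anywhere in the grid. The early return
// escapes both loops at once, doing the goto's job; any decent compiler
// will inline a helper this small.
static bool contains(const std::vector<std::vector<int> >& grid, int target)
{
    for (std::size_t i = 0; i < grid.size(); ++i)
        for (std::size_t j = 0; j < grid[i].size(); ++j)
            if (grid[i][j] == target)
                return true;
    return false;
}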

And as always, how about profiling before worrying about performance?

Nice, so let's just rely on compilers and MS to do shitty things for us. That's why we got .NET with 5+ second start-up, simplicity for total morons and crap-o-matic performance.

Is it so hard to read a good book on PC architecture and write code that doesn't need uber-optimization and heavy reliance on compilers just to work in the first place?

OK, I'll agree that this is a minor optimization, and in 99% of cases it won't make a difference anyway. But usually that one percent of cases is iterating one byte at a time and calling a function just because goto was considered bad. I've seen cases where rewriting a few lines of code in assembly gave a huge gain. We are talking about 1 minute saved out of 20 total on a high-end machine.

Yes, it's not some x^2 gain, but if you just think 'oh whateva' everywhere, those minutes add up, and sooner or later a server farm is crunching pretty but useless code for a whole week each year.

So is it so difficult to just not add stuff that may or may not be optimized out on compiler X, especially if readability is not affected? And is it really easier to jump to the function definition every time you see a nested function call? Not to mention the time you waste figuring out how to write pretty yet production-safe code...

And concerning optimization, take a good look at a VTune call graph from a VC9.0 application compiled with full optimization; it's not that optimized, if you ask me...

And before you quote me: the whole point was not about how this line of code will affect performance; the point was about how you should think about programming as a whole.

In the olden days, people knew what was happening in hardware; they were not afraid of evil constructs because they knew what was happening at the bit level. Nowadays we have a ton of crapastic coders who have no idea what is happening on the CPU, and so we have tons of buggy software, written in managed languages, that is 1000x more unstable than '89 software written by one person in assembly in a comparable amount of time.

So the end result: we have supposedly pretty code that doesn't work, low-quality software all around, and a ton of factions fighting over how a line of code should be written.

Quote:

Quote:

Don't invent some frighteningly insane constructs just to avoid goto; if it's clear and valid, use it.

If you think a user-defined function is "frighteningly insane", I think we have bigger issues than how to refactor a nested loop... [grin]

Quote:
His goto is not hard to follow. It's like a 3-line jump, and it's plain as day. Spoon, you're like a lot of people here who love to over-engineer solutions to simple problems. You talk about hard-to-follow things, and then you suggest a completely unneeded, more complex workaround to solve a problem that doesn't even exist!

How do I know that no other goto will jump to the same label?
I can't tell that just by reading those three lines of code. That's hardly plain as day, is it?
Sure, it's easy to see that this goto will jump to the label a few lines below it. It's less easy to see, when looking at the label, which gotos will jump to it. I can see one of them, but I have no easy way of determining whether there are others.
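To make that concrete, here is a hypothetical fragment (names invented for illustration); reading the label by itself tells you nothing about how many jumps target it:

void process(int* data, int n)
{
    for (int i = 0; i < n; ++i)
    {
        if (data[i] < 0)  goto bail;  // the jump you can see near the label...
        if (data[i] > 99) goto bail;  // ...and a second one, easy to overlook
        data[i] *= 2;
    }
bail:
    ;  // to know every path that lands here, you must scan the whole function
}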

And if you think a function call is hard to follow, then I don't think you're one to talk about who "should not even be a coder in the first place". [wink]
The same applies to a function call... You can't just throw out or modify a function in a 100,000+ line program; you have to do a lookup anyway. Moreover, there can be jump tables to functions, and macros as well; it's not as simple as you think.

And that is why we have comments:
goto loop_escape; // we break out of the loop because yyyy

goto is in no way worse than a function call, because even foo(x) is by no means safe: it could actually be a macro that goes boom if x is signed [wink]

Software development is hard; see the example of pretty goto-less code below...

some_cool_cross_platform.h
typedef void (*fptr)(int);
#define foo(x) (x*x)
#define xyz(x) foo(x*2)
#define foo(x) (fptr)(foo+4)(x) // silently redefines the foo above

your_fancy.h
xyz(x) if(x==1) return; // goto is evil, functions are the way to go

main.cpp
#include "some_cool_cross_platform.h"
...
for (int i = 0; i < foobar(x); i++)
{
    for (int j = 100; j > bar(y); j--) xyz(j);
}
...

All non-newb programs have a ton of stuff like that to worry about. So just leave the uber-simple goto where it is and worry about other things that really matter.
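Incidentally, even the innocent-looking #define foo(x) (x*x) above goes boom on its own (hypothetical names, but the trap is real):

#define foo_macro(x) (x*x)                      // unparenthesized argument
inline int foo_inline(int x) { return x * x; }  // the same thing as a function

// foo_macro(1 + 2) expands to (1 + 2*1 + 2), which is 5, not 9.
// foo_inline(1 + 2) is 9, and costs nothing once the compiler inlines it.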

Quote:
Original post by _Madman_
we have tons of buggy software, written in managed languages, that is 1000x more unstable than '89 software written by one person in assembly in a comparable amount of time.


I won't comment on the "buggy" part, but the size and complexity of the software isn't comparable. More expressive higher-level languages were created for a reason...

Quote:
Original post by _Madman_


Welcome! The year is 2008. Our mobile phones are more powerful than the computers that existed when function overhead was even arguably a significant portion of running time.

Quote:

Nice, so let's just rely on compilers and MS to do shitty things for us. That's why we got .NET with 5+ second start-up, simplicity for total morons and crap-o-matic performance.


Except that it has been shown that .NET code runs at comparable speed to C++ code. The simplicity of language design has been shown to reduce development time significantly (which of course will vary depending on project, developer, etc). This is all well known. Well documented. Google is your friend.

Oh, and I'd love to see what app takes 5 seconds to load in .NET that doesn't have similar load/initialization times in an unmanaged language.

Quote:

Is it so hard to read a good book on PC architecture and write code that doesn't need uber-optimization and heavy reliance on compilers just to work in the first place?


Yes, it has been shown time and again that writing correct code is hard. Writing correct, performant code is damned hard. Computers are good at analysis and mechanical processing. Optimization is by and large just that. Why waste your time doing something that the computer can do thousands of times faster, and perfectly, for you?

Quote:

I've seen cases where rewriting a few lines of code in assembly gave a huge gain. We are talking about 1 minute saved out of 20 total on a high-end machine.


When? What case? I've not seen assembly win by anywhere near 5% for any non-trivial program in about 10 years. Optimizers have gotten damned good.

Quote:

Yes, it's not some x^2 gain, but if you just think 'oh whateva' everywhere, those minutes add up, and sooner or later a server farm is crunching pretty but useless code for a whole week each year.


A server farm spending a week is a whole hell of a lot cheaper (and less annoying) than the number of man-weeks spent writing, debugging, and maintaining assembly.

Quote:

And before you quote me: the whole point was not about how this line of code will affect performance; the point was about how you should think about programming as a whole.


And niggling little optimizations are not how you should think about programming as a whole. Quite the opposite.

Quote:

In the olden days, people knew what was happening in hardware; they were not afraid of evil constructs because they knew what was happening at the bit level.


I call bullshit. Just because they knew internally what was going on (and were perhaps better at deciphering 'evil constructs') doesn't mean they dreaded dealing with some steaming pile of code any less.

And that thought process ignores the practical implications of trying to make your entire development team know at the bit level what the code is doing. As programs become increasingly complex (and business/user requirements demand that they do), abstraction is needed to keep the scope of the program workable for 'programmers aware of the limited size of their own skull'.

Quote:

Nowadays we have a ton of crapastic coders who have no idea what is happening on the CPU, and so we have tons of buggy software, written in managed languages, that is 1000x more unstable than '89 software written by one person in assembly in a comparable amount of time.


I call bullshit. There was just as high a percentage of buggy, unwieldy, unstable code in '89 as today. One example of great code does not define a time period. And I'd love to compare the relative complexity of that app versus a run of the mill app today.

Quote:
Original post by Telastyn
Quote:
Original post by _Madman_


Welcome! The year is 2008. Our mobile phones are more powerful than the computers that existed when function overhead was even arguably a significant portion of running time.

Funny you should mention that, because most of the problems can't be divided across multiple cores, and we have been stuck somewhere around 3GHz for three years now...

Maybe you can send some of those 15GHz PCs my way, because every day I monitor servers that are fighting with linear tasks for weeks.
Quote:

Quote:

Nice, so let's just rely on compilers and MS to do shitty things for us. That's why we got .NET with 5+ second start-up, simplicity for total morons and crap-o-matic performance.


Except that it has been shown that .NET code runs at comparable speed to C++ code. The simplicity of language design has been shown to reduce development time significantly (which of course will vary depending on project, developer, etc). This is all well known. Well documented. Google is your friend.
Yeah, it really cuts down the time for newbs to write newb software, like outputting data to a grid... Of course, the performance in this case matters too. And .NET is so fast that everyone is writing OSes in it nowadays. And another level of indirection actually makes things faster; that has been proven and is one of the fundamental laws of the universe...

Wonder why the hardcore guys are complaining to MS that they have dropped the C++ docs lately, and why they finally decided to release the TR1 pack... Must have been a glitch...

Quote:

Oh, and I'd love to see what app takes 5 seconds to load in .NET that doesn't have similar load/initialization times in an unmanaged language.

It's rather easy, really: take a look at the number of DLLs that need to be loaded to start a .NET application; it's about 50MB, all of which needs to be parsed, loaded into memory, and have its addresses adjusted.

Take a look at the MFC 9.0 samples. See a difference?
Quote:

Quote:

Is it so hard to read a good book on PC architecture and write code that doesn't need uber-optimization and heavy reliance on compilers just to work in the first place?


Yes, it has been shown time and again that writing correct code is hard. Writing correct, performant code is damned hard. Computers are good at analysis and mechanical processing. Optimization is by and large just that. Why waste your time doing something that the computer can do thousands of times faster, and perfectly, for you?

My point was that you don't need to optimize stuff for hours; just knowing fundamental things about how a PC operates improves the performance of your code by 30-50% and doesn't increase development time.
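A sketch of the kind of thing I mean (illustrative only; the actual gain depends on the machine and the working-set size): C arrays are row-major, so traversal order alone decides whether you walk cache lines in order or jump between them:

// Both functions compute the same sum; only the access pattern differs.
double sum_row_major(const double* m, int rows, int cols)
{
    double total = 0.0;
    for (int i = 0; i < rows; ++i)      // walks memory in layout order,
        for (int j = 0; j < cols; ++j)  // one cache line after another
            total += m[i * cols + j];
    return total;
}

double sum_col_major(const double* m, int rows, int cols)
{
    double total = 0.0;
    for (int j = 0; j < cols; ++j)      // strides cols doubles per step;
        for (int i = 0; i < rows; ++i)  // typically far slower on large matrices
            total += m[i * cols + j];
    return total;
}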
Quote:

Quote:

I've seen cases where rewriting a few lines of code in assembly gave a huge gain. We are talking about 1 minute saved out of 20 total on a high-end machine.


When? What case? I've not seen assembly win by anywhere near 5% for any non-trivial program in about 10 years. Optimizers have gotten damned good.

There are a lot of intrinsics available in Visual C++, and none of them are emitted in regular compilation. The list is pretty lengthy ;) Some things, like operations on the bits of a byte, can take considerably more space and time in plain C than they do with an intrinsic or asm. Multiply that across 10000 bitfields and you get the result.

Same with SSE2/3 and floats.
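For instance, a sketch of the float case with SSE2 intrinsics (assuming n is a multiple of 4 and the buffers don't overlap; a real version would handle the tail):

#include <emmintrin.h>  // SSE2

// Adds b into a, four floats per iteration instead of one.
void add_floats_sse2(float* a, const float* b, int n)
{
    for (int i = 0; i < n; i += 4)
    {
        __m128 va = _mm_loadu_ps(a + i);  // unaligned load of 4 floats
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(a + i, _mm_add_ps(va, vb));
    }
}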

Quote:

Quote:

Yes, it's not some x^2 gain, but if you just think 'oh whateva' everywhere, those minutes add up, and sooner or later a server farm is crunching pretty but useless code for a whole week each year.


A server farm spending a week is a whole hell of a lot cheaper (and less annoying) than the number of man-weeks spent writing, debugging, and maintaining assembly.
Remember, locally critical code is just 1% of the whole code ;)

Quote:

Quote:

And before you quote me: the whole point was not about how this line of code will affect performance; the point was about how you should think about programming as a whole.


And niggling little optimizations are not how you should think about programming as a whole. Quite the opposite.
It wasn't about little optimizations. It was about how the compiler is not a wonder-maker, and how writing CPU-aware code takes the same amount of time and, as a general rule, 20% fewer resources.

Quote:

Quote:

In the olden days, people knew what was happening in hardware; they were not afraid of evil constructs because they knew what was happening at the bit level.


I call bullshit. Just because they knew internally what was going on (and were perhaps better at deciphering 'evil constructs') doesn't mean they dreaded dealing with some steaming pile of code any less.

And that thought process ignores the practical implications of trying to make your entire development team know at the bit level what the code is doing. As programs become increasingly complex (and business/user requirements demand that they do), abstraction is needed to keep the scope of the program workable for 'programmers aware of the limited size of their own skull'.
Simple example: SQL Server 2000, a native application without noticeable bugs, works fast. SQL Server 2005, an application with trendy .NET runtimes, managed stuff, and all the coolness... Exceptions are a common thing when you do more than a SELECT, and the performance sucks...

Quote:

Quote:

Nowadays we have a ton of crapastic coders who have no idea what is happening on the CPU, and so we have tons of buggy software, written in managed languages, that is 1000x more unstable than '89 software written by one person in assembly in a comparable amount of time.


I call bullshit. There was just as high a percentage of buggy, unwieldy, unstable code in '89 as today. One example of great code does not define a time period. And I'd love to compare the relative complexity of that app versus a run of the mill app today.
Even if so, the software was not more buggy than it is today ;)

Quote:
Original post by Zahlman
What does the "activity diagram" look like that results in the desire for this goto?


I don't have any web space to store the image. I can email it to you if you PM me your email address.

Quote:
Original post by _Madman_
Funny you should mention that, because most of the problems can't be divided across multiple cores, and we have been stuck somewhere around 3GHz for three years now...

Maybe you can send some of those 15GHz PCs my way, because every day I monitor servers that are fighting with linear tasks for weeks.


Most? Few.

But my point was mostly that the vast array of programs which once required optimization to run acceptably has shrunk over the years, to the point where very many now need no optimization at all to run acceptably. Since goal #1 of a program is to meet its requirements (like running acceptably), the priority of optimization in the grand picture has also lessened over the years.

Of course there will be outliers like the one you describe. Though don't you think the daily ordeal with your lengthy linear operations might sway you a little in your outlook of what's important? Maybe? Heh.

Quote:

Yeah, it really cuts down the time for newbs to write newb software, like outputting data to a grid... Of course, the performance in this case matters too. And .NET is so fast that everyone is writing OSes in it nowadays. And another level of indirection actually makes things faster; that has been proven and is one of the fundamental laws of the universe...


It cuts down the time for experts to write exceptionally difficult software even more: more typos caught by the compiler, more bad code prevented by good language design, less tedium futzing about with manual memory management, and (generally) better IDE and tool support...

It's plenty fast for OSes, though bit-fiddling with hardware is better suited to other languages. A small price to pay so that the software common programmers actually write is better.

Quote:

Wonder why the hardcore guys are complaining to MS that they have dropped the C++ docs lately, and why they finally decided to release the TR1 pack... Must have been a glitch...


Nope. 'Hardcore' isn't exactly my crowd. Neither is C++ on Windows.

Quote:

It's rather easy, really: take a look at the number of DLLs that need to be loaded to start a .NET application; it's about 50MB, all of which needs to be parsed, loaded into memory, and have its addresses adjusted.

Take a look at the MFC 9.0 samples. See a difference?


Programs involve more than a UI. Toss in the C++ Standard Library and a quarter of Boost and you might have some comparison. Oh, and a few seconds of user time is nothing compared to avoiding the eye-gouging agony that is MFC.

Quote:

My point was that you don't need to optimize stuff for hours; just knowing fundamental things about how a PC operates improves the performance of your code by 30-50% and doesn't increase development time.


Link to the study that found that?

Quote:

There are a lot of intrinsics available in Visual C++, and none of them are emitted in regular compilation. The list is pretty lengthy ;) Some things, like operations on the bits of a byte, can take considerably more space and time in plain C than they do with an intrinsic or asm. Multiply that across 10000 bitfields and you get the result.

Same with SSE2/3 and floats.


You mean stuff that VMs can do for you without dropping into asm? Amazing.

Quote:

Simple example: SQL Server 2000, a native application without noticeable bugs, works fast. SQL Server 2005, an application with trendy .NET runtimes, managed stuff, and all the coolness... Exceptions are a common thing when you do more than a SELECT, and the performance sucks...


So you're comparing a 6th-generation app, originally stolen from an already-stable product, to an initial release of a product made by Microsoft in a relatively new programming language (at the time), using a relatively new VM (at the time), and expecting similar reliability and performance? Call me in 8 years, when .NET MSSQL has had the same time to mature. Or maybe pull up Sybase 1.0 and compare that to MSSQL 2005.



A program's first goal is to meet its requirements. If it does not work, if it does not meet its requirements, it's pretty much useless.

A program's second goal is to be maintainable. If you can't fix a program when external stuff inevitably changes, it will quickly cease to work, violating the first goal.

A program's third goal is to be flexible. Requirements usually change. More requirements usually come about. Sometimes a second program can help, but usually it's on the original program to adapt. Again, failure to do so will leave the program no longer meeting the new requirements, and you get to write a new one.

A program's final goal is to be fast. Once you know it works, you know you can keep it working, and you're pretty sure you can adapt to new problems, then maybe you can care about how long it takes to run. Is speed sometimes a requirement, putting it into the basic first goal? Sure. Sometimes. Otherwise it's gravy. Treat it as such.
