why is C++ still being over-used?

Started by
257 comments, last by jbadams 15 years, 6 months ago
I like the poster justifying the memory leaks by saying it's valid code and that the current usage metrics preclude that leak from being a bona fide mission-critical problem.

Here's an idea: fix the memory leak. Then it's neither a mission-critical problem nor a regular old less-critical problem.

Seriously...

If you feel the need to write off memory leaks as "not a problem", then we have languages designed with people like you in mind.
Yes, there are many constraints that go into the general trade-off between effective QA and correct programs.

Some apps can and will leak like sieves, and that's acceptable for their use. In a run-of-the-mill commercial app, memory leaks in regular use are unacceptable. If the leaks are not caught, QA either is not looking for them or doesn't have sufficient code coverage. Will leaks in corner cases still exist even with 'acceptable' code coverage? Yeah, in most real-world apps.

I took 'it leaks memory' to mean something that occurs in common use (every time), and that the software was non-trivial commercial software. Things will of course change if those assumptions change.


But that's all kinda arguing over details, and isn't limited to just C++. QA still needs to check for leaks in managed code.
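
To make that concrete, here is a minimal sketch (mine, not from the thread) of a leak that automatic memory management does not prevent. Reference counting cannot reclaim a cycle, and tracing collectors have the analogous problem with objects kept reachable by forgotten listeners or caches:

#include <memory>

// Two objects holding owning references to each other: their reference
// counts never reach zero, so neither destructor ever runs.
struct Node {
    std::shared_ptr<Node> other;
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->other = b;
    b->other = a;   // cycle established
    // a and b go out of scope here, but both Nodes are leaked.
}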
Quote:Original post by Rockoon1

If you feel the need to write off memory leaks as "not a problem", then we have languages designed with people like you in mind.


What about OS resource leaks? Those in drivers, for example.

Can you, with 100.0% certainty, claim that none of your code has even a single resource leak? That it is impossible for it to leak resources?

Are you sure that none of your users are running NetLimiter, which causes memory corruption every time your application uses sockets? The corruption is only rarely fatal, so rarely, in fact, that it goes unnoticed except by a handful of users.

If your code is perfect, can you guarantee that it will never be run on a faulty processor, such as an old Pentium with the FDIV bug?

Where do you set your bar?
Drifting off topic somewhat, aren't we?
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
Quote:Original post by Antheus
In how many ways can the above code fail?
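
(The code being discussed is not reproduced in this excerpt. Judging from the replies below -- the while(f) loop, the input.txt use cases, the missing return value -- it was presumably close to the following sketch; the details are inferred, not verbatim.)

#include <iostream>
#include <fstream>

int main(int argc, char** argv) {
    if (argc != 2) return 1;     // use case: exactly one filename argument
    std::ifstream f(argv[1]);
    while (f) {                  // stays true until a read fails
        int i;
        f >> i;
        std::cout << i;          // the final, failed read prints an uninitialized i
    }
    // falls off the end of main without an explicit return
}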


It depends on the assumptions we make about the environment. For instance, we don't know which files will actually be included, and what they actually contain. This is not a theoretical issue when you start including things like "gl.h", or when you have multiple compilers or SDKs installed, or when you are cross-compiling.

We don't know if "main" is the entry point of the program. MS Visual C++ seems to have many other kinds of entry points, depending on the application type.

We don't know the command line used to compile this. It is possible to define macros on the command-line, which means we don't know if "main" is really main.

Even <iostream> and <fstream> may contain nasty surprises when it comes to macros. I know I have had surprises with identifiers like "min", "max", "None" and "Current".
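
As a concrete illustration (my example, not the poster's), the best-known case is <windows.h>, which defines function-like min and max macros unless NOMINMAX is defined first; the preprocessor then rewrites std::min before the compiler ever sees it:

#define NOMINMAX      // must precede <windows.h> to suppress the macros
#include <windows.h>
#include <algorithm>
#include <iostream>

int main() {
    // Without NOMINMAX above, this line fails to compile: min(1, 2) is
    // expanded as a macro even though it is qualified with std::.
    std::cout << std::min(1, 2) << '\n';
    // Portable workaround when you cannot control the include order:
    std::cout << (std::min)(3, 4) << '\n';  // parentheses block macro expansion
}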

About the code itself, I am not sure if it will actually terminate. What happens if the file contains something that's not a digit or white space?

We don't know if the int type is large enough for values up to 1000 (what if we are compiling on an 8-bit architecture?).

The output of the program does not contain any whitespace, which makes it rather user-hostile. Moreover, it is not flushed, and since your main() function does not always return explicitly, I am not sure the output will be automatically flushed when the program terminates.

There is no explicit requirement on error handling, but if the "while(f)" condition becomes false because of something other than EOF, the user won't know about it.

Quote:
How would you fix those problems?

I could start by not using C++, whose pre-processor is evil and cannot be avoided. I could also use a programming language whose specification tells me what the basic types are able to contain.

Quote:
If the code passes above use cases, is it correct?

It's correct if it fulfills its specification. If these two use cases make up the entire specification, then I guess it's correct, modulo the issues I have raised. I doubt the user would consider it correct, though. In particular, requirements on error-handling are missing. If there really are none, then the "cat" command would do the job.

Quote:
Does there exist "perfect" solution, or are they all just a compromise?

Many of the problems I have detected would also be present with other languages, except for some of the issues related to macros which only C and C++ have. My feeling is that there are better compromises than C++ out there, even though they may not be the perfect solution they sometimes claim to be.
-- Top10 Racing Simulation needs more developers! http://www.top10-racing.org
Quote:
Quote:Original post by Antheus
Is the following piece of code:
- valid C++

I'd have to wade through my copy of the standard to answer this definitively.
- correct

I would argue no. Use case 2 seems to make it clear that the outputting of an uninitialized variable at the end of the program is undesired. The given use cases also imply more use cases which should be considered, as they specify a user entering these commands, and provide a .txt file, which clearly indicates the file is meant to be directly editable by humans.

Quote:In how many ways can the above code fail?

Aside from the above, including the missing use cases:

1) input.txt is typoed or does not exist.
As is, the program will produce no error message, and return 0 (success).

2) No arguments, or more than one argument, are provided.
As is, the program will produce no error message, and return 1.

3) input.txt can contain non-numeric entries.
As is, the program will produce no error message, and return 0 (success) after printing everything before that point -- and the uninitialized variable once as well. This is completely indistinguishable from a normal success without knowing the contents of input.txt.

Quote:How would you fix those problems?

By moving the test of f.good() between its read and its output, introducing a branch testing f.is_open(), and adding error messages and non-zero return codes.

Quote:If the code passes above use cases, is it correct?

Even with my additions, it depends on the purpose of the program. The use cases given may not accurately reflect the needs of that program.

Quote:Does there exist "perfect" solution, or are they all just a compromise?

There are never perfect solutions, but there are usually correct solutions. I believe that this is one of them, if we assume the use cases are a sufficient description of the needs of this program:

#include <iostream>
#include <fstream>

int main(int argc, char** argv) {
    switch (argc) {
    case 1:
        std::cerr << "Usage: " << argv[0] << " <filename>\n";
        return 1;
    case 2: {
        std::ifstream f(argv[1]);
        if (!f.is_open()) {
            std::cerr << "Error: Could not open " << argv[1] << " for reading.\n";
            return 2;
        }
        for (int i; f >> i; ) {
            std::cout << i;
        }
        std::cout << "\n";
        // Note: f.fail() is also set on a clean EOF, so test eof() to
        // distinguish bad input from end of input.
        if (!f.eof()) {
            std::cerr << "Error: Could not read number -- your input file should only contain numbers.\n";
            return 3;
        }
        break;
    }
    default:
        std::cerr << "Error: " << argv[0] << " only accepts one argument. Run " << argv[0] << " without arguments for usage.\n";
        return 4;
    }
}
Quote:Original post by MaulingMonkey
There are never perfect solutions, but there are usually correct solutions. I believe that this is one of them, if we assume the use cases are a sufficient description of the needs of this program:


Use cases typically define the needs of a program. If there's a feature someone will not use, that feature should not exist.

But, since the application is working with files:
- What happens if the filename is unusually long?
- What happens if input.txt is larger than 4 gigabytes?

Both of these are very real problems, yet ultimately defined by the underlying OS and standard library implementation.

Should those be test cases as well? Use cases?

Problems with the above are surprisingly common in file-processing tools, in applications that pass lots of arguments on the command line, even in compilers.

Yet they are never advertised as potential problems, and rarely are steps taken to handle them, until some user discovers them.

Does anyone presently check for those conditions and handle them?
Does anyone intend to start checking for those problems?

From the user's perspective, a file larger than 4 GB is not a problem, yet many applications fail when encountering one. The same goes for the command line. Either of those will make an otherwise perfect application useless.
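
The 4 GB failures usually trace back to 32-bit file offsets. Here is a hedged sketch (the platform-specific function names below are my addition, not anything from this thread): std::ftell returns a long, which is 32 bits on many platforms, so a 64-bit-capable variant is needed to size large files.

#include <cstdio>

// Returns the file size in bytes, or -1 on error. _fseeki64/_ftelli64 are
// the MSVC spellings; fseeko/ftello are POSIX (and on 32-bit Linux also
// require compiling with -D_FILE_OFFSET_BITS=64).
long long fileSizeBytes(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return -1;
#ifdef _MSC_VER
    _fseeki64(f, 0, SEEK_END);
    long long size = _ftelli64(f);
#else
    fseeko(f, 0, SEEK_END);
    long long size = ftello(f);
#endif
    std::fclose(f);
    return size;
}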

Quote:I could start by not using C++, whose pre-processor is evil and cannot be avoided. I could also use a programming language whose specification tells me what the basic types are able to contain.


Googling for terms like "command-line parameter filename length limit problem" brings up such issues on all platforms, in all languages, in all types of applications, from C to VB.

This is a commonly encountered example. The same failure occurs in many circumstances when filenames contain non-ASCII characters.
Quote:Original post by Antheus
Quote:Original post by Rockoon1

If you feel the need to write off memory leaks as "not a problem", then we have languages designed with people like you in mind.


What about OS resource leaks? Those in drivers, for example.


What about them? They are irrelevant to the discussion. The existence of cases outside your domain of control that do leak is no excuse for cases inside your domain of control leaking.

There is no acceptable excuse.

Reach all day long trying to come up with one if you want, but the problem really is that most programmers are not capable of correctly managing memory in complex projects.

Drivers that leak memory are a fine example of why even otherwise competent programmers usually shouldn't be managing memory by hand.

Quote:Original post by Antheus
Can you, with 100.0% certainty, claim that none of your code has even a single resource leak? That it is impossible for it to leak resources?


Here's the surprising answer for you...

No, I can't! That's the point!

Even if I am very careful about the overall design, I might make a mistake. Now, what of all the people who aren't even skilled enough to know what careful design means? How about those pressured by an unrealistic deadline?

That's the industry right there: people who aren't skilled enough, also saddled with unrealistic deadlines. A language like C++ is damn near the worst case for them.
Quote:Original post by johdex
I could also use a programming language whose specification tells me what the basic types are able to contain.


I don't know whether C++0x will implement this, but C99 has standard types that call for specific widths:

Quote:7.18.1.1 Exact-width integer types

1. The typedef name intN_t designates a signed integer type with width N. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.

2. The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.

3. These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, it shall define the corresponding typedef names.
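
These names are usable from C++ as well on most toolchains via <stdint.h> (MSVC gained the header late; Boost ships a substitute in <boost/cstdint.hpp>), and C++0x is slated to adopt them as <cstdint>. A quick sketch tying this back to the "is int big enough for values up to 1000?" question raised earlier:

#include <stdint.h>
#include <iostream>

int main() {
    int32_t total = 0;                  // exactly 32 bits wherever it exists
    for (int32_t i = 1; i <= 1000; ++i) // no 8-bit-int surprises possible
        total += i;
    std::cout << total << '\n';         // prints 500500
}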

Quote:Original post by Mike.Popoloski
The idea that JIT languages are intrinsically slower than compilation to native code is rather ridiculous. The JIT compiler can make optimizations at run time, something of which the C++ compiler can only dream. C++ compilers are nearing their peak in terms of optimization capabilities, whereas JIT compilers are just coming into their own and still have tons of untapped and unexplored avenues available.


JIT compilers are almost always behind the eight ball: they have to sacrifice expensive optimizations in favour of reasonable compilation time and memory usage in a runtime context. I would say that the class of (currently known) optimizations available exclusively to a JIT compiler is small and has a much smaller impact than the class of optimizations that are (currently) impractical or inefficient to run in a JIT compiler/VM, and that is really the crux of the matter.

Quote:Original post by Promit
Quote:Original post by yacwroy
Quote:The most efficient JIT theoretically is slower than the most efficient fully-precompiled code.
I should have qualified this by saying that if the important loops in your code are larger than the code cache, this may not be true. But with modern caches in PCs it should be true, as I'm pretty sure that cache is growing faster than the size of the important code that's taking up 90% of CPU cycles. Maybe you'll miss L1 a few more times, but I'd think JITted code would have problems staying in L1 as well.
That was completely nonsensical. Try again. And remember that the basic mode of operation for a JIT is to translate the target function to machine code, and then patch stub calls to the JIT to directly call the JITted code. It's a one-time occurrence that removes the JIT engine from the picture after it's done. (Yes, re-optimization based on observed performance is conceivable. No, we don't actually have the technology yet.)


Good JIT compilers will do exactly that, because optimizing code at the highest optimization level right away is usually a bad strategy. It can introduce huge lag spikes and take up a lot of resources, and after you're done, the code you generated may go cold. The better strategy is to compile hot functions at a low optimization level first, then gradually recompile them at higher levels if they're still hot. The smart JITs will even keep some of the intermediate data generated in previous passes (if it's small enough) so that they don't have to regenerate it.
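
As a sketch of that tiering strategy (a toy model of my own, not any particular VM's implementation, though real JITs such as HotSpot drive promotion with similar invocation counters):

#include <cstdio>

// Hypothetical tiers: interpret first, compile cheaply when warm,
// spend on heavy optimization only for code that stays hot.
enum Tier { INTERPRETED, QUICK_JIT, OPTIMIZED_JIT };

struct Function {
    const char* name;
    Tier tier;
    int callCount;
};

// Called on every invocation; promotes the function across tiers.
void onCall(Function& fn) {
    ++fn.callCount;
    if (fn.tier == INTERPRETED && fn.callCount > 100) {
        fn.tier = QUICK_JIT;       // fast-to-emit, lightly optimized code
        std::printf("%s -> quick JIT\n", fn.name);
    } else if (fn.tier == QUICK_JIT && fn.callCount > 10000) {
        fn.tier = OPTIMIZED_JIT;   // expensive passes, reusing earlier IR if kept
        std::printf("%s -> optimizing JIT\n", fn.name);
    }
}

int main() {
    Function f = { "hotLoop", INTERPRETED, 0 };
    for (int i = 0; i < 20000; ++i)
        onCall(f);
}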
