Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

17 replies to this topic

### #1 run_g  Members

Posted 14 September 2011 - 05:36 AM

I heard from credible sources that the C language's printf(...) function has one or more big disadvantages, particularly in real-time coding. I wasn't in a position to ask the sources directly, so I embarked on some research to find out what it is. As much as I tried, I couldn't find anyone discussing this on the net.
Does anyone know what these disadvantages are?
XX

### #2 Tachikoma  Members

Posted 14 September 2011 - 05:46 AM

printf needs to scan the format string and construct the output dynamically. Not only that, but a variadic argument list needs evaluation, too.

However, speed is not your biggest concern. Potential buffer overflows are the real danger, particularly from bad (or mismatched) format specifiers. The latter is generally known as an "uncontrolled format string" vulnerability, which can be exploited in creative ways if you are not careful.
Latest project: Sideways Racing on the iPad

### #3 rip-off  Moderators

Posted 14 September 2011 - 05:48 AM

What is your definition of "real time coding"? What alternatives are you considering? Who is this credible source, who makes vague assertions without the detail to back them up?

printf's main disadvantages, IMO, are the burden it places on the programmer to ensure the format string matches the set of parameters, and the fact that you are dealing with a C string API - all the usual security/bug warnings apply.

### #4 Hodgman  Moderators

Posted 14 September 2011 - 06:08 AM

What kind of program needs to output text in "real time"?

### #5 Infernal-rk  Members

Posted 14 September 2011 - 01:17 PM

the matrix screen saver......

Seriously though:
if you mean that using printf (or some self-implemented variant) to render debug text to a console in a game engine is a bad idea: yes and no. Warning messages detailing a function failure: good. Rendering information through a printf-style ellipsis (...) argument list every frame: your performance will drop miserably.

### #6 ApochPiQ  Moderators

Posted 14 September 2011 - 03:27 PM

> Rendering information through a printf-style ellipsis (...) argument list every frame: your performance will drop miserably.

Unfounded superstition, at best.

Go ahead. Write a program that calls sprintf() in a tight loop. Benchmark it. Let me know how "miserably" it performs.

For objectivity, compare to std::stringstream and other solutions for formatted output. Definitely include boost's formatted output library, because that one is dog slow.

But please don't spread this kind of rumor without any evidence or backup.

(For the record, I know you are wrong because I've seen variadic functions used for debug logging, framerate counters, and all manner of other per-frame operations, and unless you do something idiotic like trying to write an ASCII rasterizer using a printf() call per character, they are not the performance killer you make them out to be.)

Wielder of the Sacred Wands

### #7 VReality  Members

Posted 14 September 2011 - 03:30 PM

I would think that the performance issues would be related to whatever mechanism is used by the system to actually output the resulting string.

I've heard of a situation on a game console in which printf was directed to the debug window which was on a remote machine, and something about printing a lot of information over the network caused a problem.

I'm not sure what the exact issue was, but the point is that printf() is an output function, which carries some sort of output overhead, and is rarely appropriate for use in a real-time system.

### #8 ApochPiQ  Moderators

Posted 14 September 2011 - 05:43 PM

To be excessively pedantic, and expound on VReality's point a bit: the problem is doing something silly like spamming 2MB/s of logs across the network (or to disk, or the console, or anything else). It doesn't matter if you use printf() or cout or foo for any value of foo. What really matters is the nature of the activity, more so than the function being used.

You'll note I talked about sprintf() and not printf(). printf() is designed to write to stdout, which might be the console, or piped to another process, or routed across a network into another country, or whatever. My point is that variadic functions are not inherently performance problems when considered in the context of, say, rendering text. The cost of drawing the actual text to the screen is going to be orders of magnitude higher than the cost of calling sprintf(). And the same follows for printf() itself, or fprintf(), or anything else: the expense is the nature of the operation, not the variable number of arguments.

### #9 Infernal-rk  Members

Posted 15 September 2011 - 12:09 AM

I was outputting from many points in my physics calcs per frame and my fps chunked down hard. The entire game slowed considerably. It may have also been extra memory allocations occurring to field the ever-expanding list of strings forming the console history.

### #10 Tachikoma  Members

Posted 15 September 2011 - 03:06 AM

I was heavily using my own custom variadic printf function to render debug info in GL on iOS. I had no problem on that front. In fact the bottleneck was actually rendering the glyphs in GL as opposed to constructing the string in the custom printf. And that was on a feeble ARM processor.

### #11 SimonForsman  Members

Posted 15 September 2011 - 05:07 AM

> I was outputting from many points in my physics calcs per frame and my fps chunked down hard. The entire game slowed considerably. It may have also been extra memory allocations occurring to field the ever-expanding list of strings forming the console history.

If the buffer gets filled, printf and cout << will block until everything is printed. Printing to a console window or file might also require a context switch, which is fairly expensive.

With C++ you can obtain the streambuf object for your stream via the rdbuf method, then replace the buffer with a larger one using the pubsetbuf method on the streambuf object.
With C you can use setvbuf to change the buffering mode on stdout to full buffering (the default is usually line buffering, which flushes the buffer once a newline occurs), and setbuf to switch to a larger buffer if needed.

Also, remember that in C++ std::endl flushes the buffer as well; use "\n" instead if you want to buffer more than one line.
I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!

### #12 wodinoneeye  Members

Posted 17 September 2011 - 06:06 AM

I worked for a company in the early Windows days (Windows 1) on what was actually a DOS program (with Tesseract for thread switching), and they didn't want any printf in the program, as it WAS a performance-driven real-time system (a multiple-phone audio app). We used strcpy(), strcat() and itoa() in its place; we just had to do the equivalent of what the printf did. If I had been more experienced at the time, I probably would have made C macros to make it easier.

Interpretation always adds more CPU work, even as simple as printf's interpreter is.
--------------------------------------------Ratings are Opinion, not Fact

### #13 ApochPiQ  Moderators

Posted 17 September 2011 - 03:14 PM

Did you have some reason for not just using sprintf()? I have a hard time believing your hand-rolled solution based on strcat() of all things could outperform a decent library implementation of sprintf().

### #14 rip-off  Moderators

Posted 19 September 2011 - 02:41 AM

I think he meant replacing:

```c
printf("Hello %s stuff %d.\n", someString, someNumber);
```

With something like (forgive my C, it may be rusty):

```c
char buffer[N] = "Hello ";
char *pointer = buffer;
char intBuf[M];
itoa(someNumber, intBuf, 10);  /* convert the number into intBuf first */

pointer = strcat(pointer, someString);
pointer = strcat(pointer, " stuff ");
pointer = strcat(pointer, intBuf);
strcat(pointer, ".\n");

/* Output "buffer" */
```

Essentially, it sounds like the printf formatting logic was inlined into every call site.

> Interpretation always adds more CPU work, even as simple as printf's interpreter is.

Doing something is always slower than doing nothing, certainly. I could see how such a level of optimisation would be necessary for a real time system like the phone system described, but it is unlikely to be the bottleneck, or even particularly problematic, in the OP's case.

It must be noted that hard-coding these things does not allow an application to have the string format externalised (and then internationalised/localised), which can end up being a bigger concern than performance for many commercial applications.

### #15 ApochPiQ  Moderators

Posted 19 September 2011 - 03:13 AM

Either way, a decent (even a naive) implementation of the printf() family (including sprintf) would handily beat that. You have 4 calls to strcat() alone, which means walking the output string 4 times to find its length, strip the null terminator, then copy the new data to the end of the string; even a good strcat() implementation is going to struggle to beat a single pass through the input format specifier and a single pass at constructing the output buffer.

There are ways to make this faster, such as not using strcat() or itoa() and basically reimplementing sprintf() at every call site as you allude to, but that's just begging for binary code bloat and is likely not going to be any faster than a decent function call.

This kind of thing is why "optimizing" code without profiling evidence to prove that you're doing the right thing is generally very, very unwise.

### #16 rip-off  Moderators

Posted 19 September 2011 - 03:28 AM

> Either way, a decent (even a naive) implementation of the printf() family (including sprintf) would handily beat that. You have 4 calls to strcat() alone, which means walking the output string 4 times to find its length, strip the null terminator, then copy the new data to the end of the string; even a good strcat() implementation is going to struggle to beat a single pass through the input format specifier and a single pass at constructing the output buffer.

Whoops. I was trying to do as described here, but I forgot that regular strcat() returns the destination pointer rather than a pointer to the end of the string (I warned you my C was rusty!).

### #17 wodinoneeye  Members

Posted 21 September 2011 - 06:18 AM

It was fairly simple error/log string building with static buffers, and very few string variables were appended to other strings (thus fixed offsets into the string buffers could be hard-coded):

```c
char strx[100];
strcpy(strx, "the error number is ");
itoa(varx, &strx[20], 10);  /* writes its own null terminator */
```

strx was then sent to the output call for that buffer, to a file or wherever (often an internal debug msg stack).

So not all that many sub-calls, and it skips the char-by-char interpretation of the base string that printf or sprintf does.

It's so long ago I forget whether I did memcpy instead of strcpy.

### #18 Antheus  Members

Posted 21 September 2011 - 09:31 AM

A while back I was dealing with pushing throughput of text output as far as I could for a text processing app.

In rough order fwrite < printf < iostream::write < cout.

In all cases, buffer size was crucial. Depending on the output target (disk/memory), the overall cost of performing IO was orders of magnitude more expensive than any formatting. I ended up with buffers of up to 64 megabytes that would be dumped to output in a single IO. The buffers were written to using sprintf. cout/basic_stream was always inferior due to occasional heap allocation of passed arguments.

For comparison, IO devices are limited by IO operations per second, even SSDs. Highest end SSDs reach some 100k IOPS. Rotary drives reach 10k tops. Console tends to have huge overhead as well.

sprintf, by contrast, has no problem formatting data at rates comparable to memory bandwidth, so a gigabyte per second is perfectly fine.

tl;dr: printf vs. cout doesn't matter; buffer size limits throughput, and it might be necessary to buffer several megabytes before reaching the limits on desktop systems.

I also remember once dabbling in a templated printf that would resolve the format and generate the layout at compile time. In assembly, things ended up looking like this:

```cpp
fancy_cout << 1234;
// compiled down to, roughly:
buffer[0] = '1';
buffer[1] = '2';
buffer[2] = '3';
buffer[3] = '4';
```

I don't remember where things eventually broke down and why. Maybe it even worked and I just decided there was too much magic going on. Either way, it was never the bottleneck.
