Which of these two options is best?

8 comments, last by ultramailman 11 years, 3 months ago

I am using SDL alongside C. Before, I was doing this:


dest.x-=campos.x;
dest.y-=campos.y;
SDL_BlitSurface(surface,NULL,screen,&dest);

I wrote a function and now use this instead of the above code:


draw(NULL,dest,surface);

In both instances, NULL is the clipping rectangle. Here is the function:


void draw(SDL_Rect *clip, SDL_Rect ofs, SDL_Surface *s) {
    ofs.x -= campos.x;
    ofs.y -= campos.y;
    SDL_BlitSurface(s, clip, screen, &ofs);
}

Drawing graphics this way has saved me 0x109C bytes (4.1 KB) so far. I know this is more effective code-wise, but which method is best in terms of processing? I want to make the system requirements be as low as possible. Even if one method is as much as 1% more effective, please share. If you have numbers, statistics, or anything like that to back up your statement, please supply those. Is there a way I can improve the code I have shared? Something that bugs me is that I have to pass dest instead of &dest, as well as the whole surface, into the function instead of drawing directly.

Thank you! I look forward to your responses!


Unless you have an actual reason for doing so (i.e. compiling for cell phones. Not smart phones. Cell phones), this kind of micro-optimization of space usage and processing power is extremely wasteful of a much more important resource: your lifespan, and specifically the portion of it that it costs to complete your project.

I'd use the draw function, because of how much easier it is to develop with. Though, I'd pass the SDL_Rects by const reference (in C, by const pointer), and instead of making the optional parameter first, I'd make it last and 'overload' it. (Well, C doesn't have overloads, so I'd write a second function with a different name.) Further, I wouldn't have 'screen' be a global variable - not just because globals should be avoided, but also because you really might want to draw to non-screen surfaces. I'd also give the functions and parameters more descriptive names.


void drawSurfaceClipped(SDL_Surface *source, const SDL_Rect *clipRect, int x, int y, SDL_Surface *destination)
{
     SDL_Rect screenOffset;
     screenOffset.x = x;
     screenOffset.y = y;

     if(clipRect == NULL || clipRect->w == 0 || clipRect->h == 0)
     {
          SDL_BlitSurface(source, NULL, destination, &screenOffset);
     }
     else
     {
          /* SDL_BlitSurface takes a non-const rect, so make a local copy. */
          SDL_Rect clip = *clipRect;
          SDL_BlitSurface(source, &clip, destination, &screenOffset);
     }
}

void drawSurface(SDL_Surface *source, int x, int y, SDL_Surface *destination)
{
    drawSurfaceClipped(source, NULL, x, y, destination);
}


It is far superior in every circumstance to have your code clean than to have it fast. Don't listen to the legions of poor programmers who try to convince you otherwise, no matter how pretty their tech demos are.

Clean code can be made fast easier than fast code can be made clean.
Clean code can be optimized for space easier than fast code can be.
Clean code decreases development time, which can be re-invested to make a better product.
Clean code reveals bugs easier - bugs that fast code often disguises.

Clean code is quality code.


Fast code is like fast cars: only for bragging rights, of no real practical value, crashes more frequently, and costs you a lot more to repair. But it sure does give people the illusion that they impress other people who also like fast cars. I'll stick with my used Honda; it can still reach 70mph, and I don't feel like spending an extra $10k just to get it to 180mph if it doesn't benefit me at all to drive that fast. (For the record, I'd feel the same way about having an 8-core, 16 GB RAM computer, when my 2-core, 2 GB RAM computer satisfies all my needs.) Why waste one resource (money or lifespan) to purchase another resource (mph or executable size) just to waste the resource you just purchased? If for educational purposes (or simply your hobby), great! If for real projects, then you are making a poor trade that is vastly impractical, despite how astoundingly popular such poor trades are.

Other people's opinions may vary.

Sorry, it has saved you 4.1 kb of what?

Ultramailman: 4.1 KB of space when the project is compiled.

Servant of the Lord: You have taught me a few things in your post! Thanks! I see exactly what you are saying, but the lines of code I am using are a trivial matter and will never need to be changed. I had space and system-requirement optimization in mind, but 4.1 KB isn't a whole lot of space, so if the new code requires more out of the system, it doesn't seem like a fair trade. I would like a way to test which is more processor-efficient. When it comes to major pieces of code, such as making something follow a path, a function is far more necessary for organization, debugging, and so on. But when it boils down to choosing between 1.) typing one line instead of three and 2.) having code that isn't as harsh on the processor, I feel it is necessary to go with the latter, especially when the code already works properly and does not need any improvement. Are you able to tell me which method I shared is more processor-efficient? Thank you for your descriptive response, and a Reputation++; for you!

Neither is more efficient. A good compiler will optimize them to roughly equivalent code with a function that small.

But you can't tell for sure: one may be more efficient on one system, and the other more efficient on another. Instead, we let the compiler optimize our code for us at the very lowest levels, because compilers are custom tailored for whatever operating system and CPU hardware (x86, x64, ARM, PowerPC, etc...) they are targeting.

Any changes we make might only save us a speck of processing power here or there. Instead, optimization matters most at higher levels. If a function is called 10,000 times a frame and takes 2 microseconds each time, you might be able to trim it down to 1.75 microseconds for a modest speed gain (2,500 microseconds). However, if you could instead optimize at a higher level so you don't need to call the function 10,000 times - calling it only 8,000 times instead - you'd save 4,000 microseconds.

But realize this: your users won't care how fast it goes if you are optimizing the parts that already were running fast enough. Instead, you need to optimize just the parts of your code that happen to run when a lot of other parts of the code are also running, which together cause the program to lag.

Once your program is close to completion, you run what's called a 'profiler' on your code; it will measure the performance of different parts of your program and tell you where it is spending the most time. Then you can go to bat on that specific piece of code and optimize it until it runs fast enough to please your users.

Honestly, even if something is running 10,000 times a frame, the CPU is usually fast enough to handle it. When I was optimizing a piece of my code that was lagging badly (it was doing a huge amount of work every so often, but not every frame), the offending function was being called over a quarter of a million times (>250,000). The minor optimizations I was making helped a little (because even 1 microsecond adds up when multiplied by 250,000). But the real optimizations came when I figured out I could skip the 250,000 function calls entirely. Even more speed came when I started pre-optimizing the data the code was working on, so the code itself had less work to do.

The best optimizations come from figuring out how you can avoid having to run any code at all, not from figuring out how to get existing code to run faster. Leastways, that's what my own personal limited experience has taught me.

Aye, some guru once said something like, "The fastest and most error-free code is that code which is never written."

The idea being that reducing work by having good structure is very nearly always better than optimizing the individual routines. In production code it's almost a sin to low-level optimize something that a profiler hasn't singled out and that can't be refactored out. If you're getting paid to code something and you spend 5 hours on an optimization that gets factored out the next day, you just wasted 5 hours of paid work. Sometimes I'll get 'the bug' (in my brain) and optimize something until it's ridiculous, but I always do so knowing that it's a pointless exercise, and I usually think up several ways to factor the code out even while I'm in the midst of my madness.

I posted something a while ago about an xor encryptor that I was playing with (it was making my hard disk stall and make a weird grinding noise). I optimized it by actually compiling bytecode for the encryption at runtime and then executing it, which is ridiculously fast, but the fact is that the choke-point in that app is the disk IO. I knew from the start that if I were to run the old (non-optimized) code while the app was async reading into the next buffer it would completely mask the encryption time, resulting in better performance without the tomfoolery. I just wanted to play with assembly for a while.
void hurrrrrrrr() {__asm sub [ebp+4],5;}

There are ten kinds of people in this world: those who understand binary and those who don't.

Don't worry about picky little details for now. Develop your game, and then if you have performance problems, fix them then. Unless you're running an 8080, you really have no need to worry about issues like this.

Stay gold, Pony Boy.

More Reputation++; for you guys! I totally forgot that the compiler compiles it down to much the same low-level code either way. I am actually getting better results in my project by doing it the way I was before (without a function), so I will stick with that. Code can almost always be improved, so as long as it runs and there are no memory leaks or anything, it is good enough for me, and definitely good enough for those who run it. Thank you very much for your descriptive responses, and good luck with your own projects as well! :D

Turns out you write programs for other people to read, not for compilers to read. Oh, sure, most people write write-only code, and rather than maintaining it or fixing the bugs, they ignore them or throw their work away and start again. Then there are productive, successful projects.

The most important thing is to make your code understandable. Clarity of purpose. Clarity of meaning. A function or class should have a single, clearly defined purpose, and it should be named to describe that purpose. 70% to 80% of the cost of software is maintenance, and new code becomes old code in maintenance the moment after it's written. You're likely the next programmer who has to understand what you wrote, so focus on clarity rather than premature pico-optimization.

First make it work, then make it work fast. You won't get to the second step if you can't understand what you wrote.

Stephen M. Webb
Professional Free Software Developer

I see. I think if you only use that snippet once, then it's fine to just use the three lines of code directly and not write a function for it. But even if you do make a function for those three lines, compilers can notice that it is used once and inline it, producing the same thing.

If you use that snippet many times in many different places, then it is better to write a function for it, because it is better to repeat less code. If you absolutely must have the performance benefit of not using a function, you can consider making the function "static inline". This allows your compiler to decide whether to inline it or not. If static inline is not possible with your compiler, you can define a function-like macro. That way you will have good-looking and (possibly) faster code. Also note that inlining doesn't always make the code faster.

This topic is closed to new replies.
