Plethora

Curiosity about compiler warnings


So I've read before that it's generally good practice to take care of compiler warnings whenever possible. That makes sense to me, and I do so as much as I can, but every so often I run into warnings that don't really make sense to me. Such as:

 

conversion from 'int' to 'float', possible loss of data

 

Firstly, how would I lose data going from int to float? Obviously if I were going from float to int I could potentially lose data, and that generally wouldn't be good, but from int to float?

 

Secondly, the line of code in question takes the return value of one function (not even my function, it's from a library I'm using) and passes it as a parameter to another function (also not one of mine). Would it be considered good practice to explicitly cast the int to a float before using it as a parameter in this case?
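For reference, the situation being asked about looks roughly like this; the function names are made-up stand-ins for the library calls, not the actual ones:

    #include <cstdio>

    // Hypothetical stand-ins for the third-party library functions:
    int  libGetValue()           { return 42; }                         // returns an int
    void libSetOffset(float off) { std::printf("offset = %f\n", off); } // expects a float

    int main()
    {
        // Implicit conversion: compiles, but this is the kind of call that
        // produces the "conversion from 'int' to 'float', possible loss of
        // data" warning quoted above.
        libSetOffset(libGetValue());

        // Explicit cast: same behaviour, but it documents that the conversion
        // is intentional and silences the warning.
        libSetOffset(static_cast<float>(libGetValue()));
        return 0;
    }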


In general, is it considered good programming practice to eliminate warning-generating code at all times, even when you're pretty damned sure there wouldn't ever be a problem with it?


Re: floats, see http://www.altdevblogaday.com/2012/02/05/dont-store-that-in-a-float/ - the section headed "Tables", about two-fifths down the page, summarises floating-point precision loss at various ranges. As long as the precision loss is sufficiently low (i.e. under 1) it's probably not a problem (but be absolutely certain that your data will fall in that safe range!). What's interesting is that there are ranges where the precision loss becomes greater than 1, and those ranges are well within the capacity of a 32-bit signed int. So, for values over about 10,000,000 or so, you will lose data by converting from int to float.
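You can see the cutoff directly with a small standalone test (just a demo of float's 24-bit significand limit, not tied to anyone's code in this thread):

    #include <cstdio>

    int main()
    {
        // A float has a 24-bit significand, so every integer up to 2^24
        // (16,777,216) is exactly representable; beyond that, some values
        // have to be rounded.
        int exact   = 16777216;   // 2^24: still stored exactly
        int rounded = 16777217;   // 2^24 + 1: cannot be stored exactly

        std::printf("%d -> %.1f\n", exact,   static_cast<float>(exact));    // prints 16777216.0
        std::printf("%d -> %.1f\n", rounded, static_cast<float>(rounded));  // prints 16777216.0 (data lost)
        return 0;
    }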

 

Re: warnings, all they are is the compiler giving you a heads-up that the code in question is potentially suspicious. They're neither exhaustive nor prescriptive, but I still consider it good practice to compile at a reasonably high warning level (2 or 3; going all the way to 4 may be too much, since it can flag otherwise perfectly fine code such as "while (1)" for infinite loops) and to always compile with warnings treated as errors. Do this early enough in the development process and it gets you into the habit of writing good, clean code from the outset.
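For anyone unfamiliar with the switches, these are the usual ways to turn that on (exact flag names depend on your toolchain, so check your compiler's docs):

    // MSVC:       cl /W4 /WX source.cpp
    //             (/W4 = high warning level, /WX = treat warnings as errors)
    // GCC/Clang:  g++ -Wall -Wextra -Wconversion -Werror source.cpp
    //             (-Wconversion is the flag that reports value-changing
    //              implicit conversions such as int -> float)

    int main()
    {
        int   count = 1024;
        float a = count;                      // implicit conversion: typically reported
                                              // with the flags above; with warnings-as-errors
                                              // on, this line stops the build
        float b = static_cast<float>(count);  // explicit: compiles cleanly
        return (a < b) ? 1 : 0;
    }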


Well, the line of code in question refers to the X'th pixel of a 32x32 image, so I think I'm safe in this case; if the value ever fell into the range where precision becomes a problem, my code would have far more serious issues anyway, lol.
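For what it's worth, that reasoning can even be turned into a compile-time check (a sketch, not taken from the actual program):

    int main()
    {
        // The largest linear pixel index of a 32x32 image is 1023, far below
        // 2^24, the point where float stops representing every integer
        // exactly, so converting these indices to float can never lose data.
        constexpr int maxIndex = 32 * 32 - 1;   // 1023
        static_assert(maxIndex < (1 << 24), "pixel index exceeds float's exact-integer range");
        return 0;
    }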

 

But point taken on the general case. Thanks for the answers, all.  :)


"So I've read before that it's generally good practice to take care of compiler warnings whenever possible"

On a few of the commercial projects I've worked on, they've enabled the highest warning level and also enabled "treat warnings as errors", so that your code won't build if it produces any warnings.
They've also had a system where you cannot commit your code to the central repository if it doesn't build, which forces people to always fix their warnings immediately. ;)


"In general, is it considered good programming practice to eliminate warning-generating code at all times, even when you're pretty damned sure there wouldn't ever be a problem with it?"

 

The reason warnings exist is to let you know that you've done something potentially problematic. The compiler doesn't know whether you did it intentionally or not; the warning gives you the opportunity to clarify. If it *is* intentional, you can add a cast or whatever else is needed so that the compiler, and other programmers, can see that it's what you wanted to do. If it was a mistake, you can change the offending code to something else. Fixing warnings is considered good practice because it makes your code easier to reason about, but you shouldn't do it blindly: always be aware of the context surrounding the offending code. Things like truncation and signed/unsigned comparisons can come back to bite you later, even if you used them intentionally, especially as the code base evolves and lines are added and removed.
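As a concrete illustration of the signed/unsigned case, here's a generic sketch (not from anyone's code in this thread) of the usual warning and one deliberate way to resolve it:

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main()
    {
        std::vector<int> values = {1, 2, 3};

        // Classic signed/unsigned warning: size() returns an unsigned type,
        // so "int i = 0; i < values.size()" compares signed with unsigned.
        // for (int i = 0; i < values.size(); ++i) { ... }

        // One intentional fix: use the container's size type for the index.
        for (std::size_t i = 0; i < values.size(); ++i) {
            std::printf("%d\n", values[i]);
        }
        return 0;
    }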


A long time ago, while writing a graph-drawing class and wanting some way to test it, I ended up writing a method to plot precision ranges for floating-point numbers. Here are the linear and logarithmic displays for 32-, 64- and 80-bit floating-point numbers, graphed as exponent vs. minimum precision (step). I find them useful to look at every now and then.

 

[Attachments: fpPrecisionLin.jpg and fpPrecisionLog.jpg - linear and logarithmic plots of exponent vs. minimum precision step]
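If you'd rather see the numbers than the graphs, a tiny standalone program (just a sketch using std::nextafterf, not the graph class above) prints the step size at a few magnitudes; it climbs past 1.0 once values get above 2^24:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // For a handful of magnitudes, print the gap to the next representable
        // float, i.e. the minimum "step" that the graphs are plotting.
        const float samples[] = { 1.0f, 1000.0f, 1.0e6f, 1.0e7f, 1.0e8f, 1.0e9f };
        for (float x : samples) {
            float step = std::nextafterf(x, 3.4e38f) - x;
            std::printf("value %.1e -> step %g\n", x, step);
        }
        return 0;
    }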

