Tryonic Prinv

why use scalar_t?


Recommended Posts

Probably an easy question for the more experienced people out there... Why do people define a type called scalar_t as a float, and then use that throughout the program instead of float? My only guess is that it would afford you the ability to change the type later on, with only a single line of code, but why would you want to do that? To me it's pretty obvious that this thing should be a float. Thanks.

Guest Anonymous Poster
Your basic explanation is fine, although you've not realised all the consequences. It might be a float today, but it might later be promoted to a double. The size of float is also compiler dependent, and if you change compilers you might want to substitute another type entirely.

Hence the typedef-ing.
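
To make that concrete, here is a minimal sketch (the names are my own, not from the post above): one typedef controls every quantity declared as scalar_t, so promoting the whole program from float to double is a one-line change.

typedef float scalar_t; // change this one line to "typedef double scalar_t;" to promote everything

scalar_t dot3(const scalar_t a[3], const scalar_t b[3])
{
    // compiles and behaves correctly whether scalar_t is float or double
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}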

Guest Anonymous Poster
Quote:
The size of float is also compiler dependent

That's correct, but in reality it's the same everywhere.

The only reason it should be a float is that on a generic x86, floats are faster to deal with. When doubles become fast enough, the typedef can be changed and the extra bits taken advantage of. It's also important to realise that not every piece of hardware will have a 32-bit type called float, so if you rely on 32 bits you'll have problems. The typedef fixes that by letting you specify a 32-bit floating-point type in one place, regardless of what the underlying hardware provides.
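
One hedged way to make that guarantee explicit (a sketch, using the classic negative-array-size trick, since this thread predates static_assert):

#include <climits>

typedef float scalar_t; // swap in whichever type is 32 bits on your target

// Compile-time guard: the array size is -1 (an error) unless scalar_t
// is exactly 32 bits, so a wrong typedef fails loudly at build time.
typedef char scalar_t_is_32_bits[(sizeof(scalar_t) * CHAR_BIT == 32) ? 1 : -1];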

The problem with assuming it's useless is that the use you came up with carries a lot more weight than you're giving it credit for.

Quote:
Original post by Anonymous Poster
Quote:
The size of float is also compiler dependent

That's correct, but in reality it's the same everywhere.


Can you prove this? Or are you just assuming that from now until eternity this is how things will be?

They might even want to use something else entirely some day, like GMP. You never know, one day you might wake up and go, "You know, I really wish I had thirty million bits of precision for my floating point numbers".
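
For example (a sketch assuming GMP's C++ bindings, gmpxx.h and mpf_class; in practice the swap is rarely this clean, since literals and I/O also need attention):

#include <gmpxx.h> // link with -lgmpxx -lgmp

typedef mpf_class scalar_t; // was: typedef float scalar_t;

scalar_t circumference(const scalar_t& radius)
{
    // mpf_class overloads the usual arithmetic operators, so code written
    // against scalar_t largely compiles unchanged at arbitrary precision
    return scalar_t("6.283185307179586") * radius;
}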

Quote:
Original post by Anonymous Poster
Quote:
The size of float is also compiler dependent

That's correct, but in reality it's the same everywhere.
No it isn't. In particular, since all sizes are measured relative to char (sizeof(char)==1 everywhere, even on architectures where a char is more than 8 bits), sizeof(float) and sizeof(double) can come out _smaller_ than 4 and 8 respectively.
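
A quick probe makes the point (the only guarantees are sizeof(char) == 1 and CHAR_BIT >= 8; the familiar 4 and 8 are common results, not promises):

#include <cstdio>
#include <climits>

int main()
{
    std::printf("char  : %d bits\n", CHAR_BIT);
    std::printf("float : %u chars, %u bits\n",
        (unsigned)sizeof(float), (unsigned)(sizeof(float) * CHAR_BIT));
    std::printf("double: %u chars, %u bits\n",
        (unsigned)sizeof(double), (unsigned)(sizeof(double) * CHAR_BIT));
    return 0;
}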

Typedefs like this are CRITICAL in writing a program that can be ported to other platforms (as well as into the future) easily.

And it comes into play in other places as well. For instance, if your game can use 16- or 32-bit color depths, you'll want a typedef somewhere like "Pixel" so that when you allocate buffers, such as:

Pixel buffer[Height][Width];

it would be the right size for your pixels. (Obviously with this particular typedef you'd have to recompile to go from 16- to 32-bit pixels, UNLESS you are in a template; templates often use typedefs for exactly this purpose, as in the sketch below.)
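
Here is a sketch of both forms (Uint16/Uint32 are assumed platform-specific typedefs I'm introducing for illustration, not standard names):

typedef unsigned short Uint16; // assumption: 16 bits on this platform
typedef unsigned int   Uint32; // assumption: 32 bits on this platform

typedef Uint32 Pixel; // plain typedef: recompile with Uint16 for 16-bit color

template <typename PixelT>
void clear(PixelT* buffer, int count, PixelT value)
{
    // the template version services 16- and 32-bit framebuffers in one build
    for (int i = 0; i < count; ++i)
        buffer[i] = value;
}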

Personally, I typedef "Scalar" to float in my program, because I am just weird that way ... all of my Point2D and Point3D classes are templated and take the "Scalar" type as an argument anyway ... but I don't want to have to search through my program seeing stuff like this:

Point2D<float> point;
float total;
int index;

and not know whether "total" should change to double when my scalar does ... the two rules for using typedefs that I know are: use them EVERYWHERE that should change TOGETHER, and use them everywhere that must remain compatible without conversion

... for instance ... I do the former when I say:

typedef unsigned Pixel;     // might be a 16-bit bitfield, a 32-bit unsigned, a 4-byte structure, 4 f16s, 4 floats, or a custom class
typedef int Color;          // might become a Color class later (actually it is now, but it didn't used to be)
typedef float ScreenScalar; // almost always a float for me
typedef double GameScalar;  // almost always a double, because I need the precision for space-scale values
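
To show how those layer together (a sketch using the ScreenScalar/GameScalar typedefs above; Point2D here is a stand-in, not the poster's actual class):

template <typename Scalar>
struct Point2D
{
    Scalar x, y;
};

// everything declared through these names changes together when a typedef
// changes, which is what settles the "should total become double?" question
typedef Point2D<ScreenScalar> ScreenPoint;
typedef Point2D<GameScalar>   GamePoint;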

Now I'm curious, would this also explain why people use things like GLuint, GLint, GLfloat, etc.?
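
For reference, a typical gl.h does exactly this (exact definitions vary by platform header, so treat these lines as representative rather than canonical):

typedef unsigned int GLuint;  // GL wants a 32-bit unsigned integer everywhere
typedef int          GLint;   // ...and a 32-bit signed integer
typedef float        GLfloat; // ...and a 32-bit float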
