Archived

This topic is now archived and is closed to further replies.

A few basic C/C++ questions



Hello, I've noticed I've gone a long way with a few holes in my knowledge, so I thought I'd ask...

1) long vs int

I might have been misled when learning. Until recently I -always- used unsigned long or signed long unless interacting with an API that required otherwise. I got this in part from LaMothe's comment a year ago about always using 4-byte data types, and I've never questioned it since. So, I just ran a test, sizeof(int), and found it to be 4. So is there any difference between a long and an int? When I first learnt C++ from Jesse Liberty's 21 Days book I learnt that an int was 2 bytes and a long 4. Furthermore, I recall reading ages ago that the compiler changes these to the most efficient form for the machine anyway... Anyone wish to comment?

2) Casting classes:

I'm attempting to write a multi-API engine (or at least get it far enough along that I can add the Direct3D stuff later as a learning exercise). I tried to do this but was thwarted:

m_RenderingContext = (void*)Direct3DCreate8(D3D_SDK_VERSION);

...later

(IDirect3D8*)m_RenderingContext->Release();

Direct3DCreate8 returns an IDirect3D8*, but I cast it to something generic that will work for both OpenGL and Direct3D. It doesn't work. Is casting a class pointer different from casting integral types?

Many thanks

cb

int is platform dependent. It is usually the size of a word (or the size of an address for the CPU), BUT it could also be different (e.g. handheld devices).

I know that for x86 computers and SPARCs, int is 4 bytes.
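
A quick way to see what a particular compiler chose is simply to print the sizes yourself, for example:

#include <iostream>

int main()
{
    // Print what this compiler/platform picked for each type.
    std::cout << "char:  " << sizeof(char)  << std::endl;
    std::cout << "short: " << sizeof(short) << std::endl;
    std::cout << "int:   " << sizeof(int)   << std::endl;
    std::cout << "long:  " << sizeof(long)  << std::endl;
    std::cout << "void*: " << sizeof(void*) << std::endl;
    return 0;
}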

Edited by - Gorg on March 21, 2001 11:05:30 PM

quote:
Original post by gimp
2) Casting classes:
I'm attempting to write a multi-API engine (or at least get it far enough along that I can add the Direct3D stuff later as a learning exercise). I tried to do this but was thwarted:

m_RenderingContext = (void*)Direct3DCreate8(D3D_SDK_VERSION);

...later

(IDirect3D8*)m_RenderingContext->Release();


Change that to this (the -> binds more tightly than the C-style cast, so without the extra parentheses the compiler tries to call Release() through the void*):
  
m_RenderingContext = (void *)Direct3DCreate8(D3D_SDK_VERSION);
// ...later

((IDirect3D8*) m_RenderingContext)->Release();


"Finger to spiritual emptiness underlying everything." -- How a C manual referred to a "pointer to void." --Things People Said
Resist Windows XP's Invasive Product Activation Technology!
http://www.gdarchive.net/druidgames/

1) You should always use int, unless you are dealing with an API that requires a different type, or you need to know the exact size of the type. On Win32, long == int. Nothing is guaranteed to be a certain size; in fact, char isn't even guaranteed to be eight bits. If you are aiming for portable code, use typedefs in key places.

For example, if you are writing something that is inherently low-level (such as a console emulator), it's a good idea to place these in your project's headers:


typedef int int32;
typedef short int16;
typedef signed char int8;
typedef unsigned int uint32;
typedef unsigned short uint16;
typedef unsigned char uint8;

typedef unsigned char byte;

...which you can then use when laying out low-level structs that depend on their data members being a certain size. Changing the typedefs in the header when compiling for a different platform is much easier than hunting down bugs. Note that char is unique among integer types in that it can default to either signed or unsigned, depending on the platform.
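
For example (a made-up header, purely for illustration; the field names mean nothing), a struct describing a fixed binary layout might look like this:

struct demo_file_header
{
    uint8  magic[4];     // file signature
    uint16 version;      // format version
    uint16 flags;        // option bits
    uint32 data_offset;  // where the payload starts
    uint32 data_size;    // payload size in bytes
};

(Padding and packing still have to be pinned down separately, e.g. with #pragma pack on most compilers, but the member widths themselves no longer shift from platform to platform.)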

Again, this is only when you -need- to know the exact sizes. In most cases, such as almost all function parameters and many data members, you don't need to know the exact size, only a good guess. int is going to be at least 2 bytes on just about every platform you can find today; most likely it'll be 4 bytes because most code is 32-bit.


2) I prefer:


static_cast<IDirect3D8*>(m_RenderingContext)->Release();



Of course, it is a bit of typing. However, having a base class store protected data members for derived classes tends to create dependencies and messy code - is there a reason why you can't let the derived classes simply declare their own data types? If you really do need to, you might want to add a function to each derived class that handles the casting for you:


IDirect3D8* direct3d() { return static_cast<IDirect3D8*>(p); }



So it can use direct3d()->method(). This also tends to localize the dependencies of the derived class upon the protected base class data member to one place in the code, namely the direct3d() function.
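
A minimal sketch of that arrangement (the class names here are made up, and IDirect3D8 is a tiny stub so the snippet stands alone - in the real engine it comes from d3d8.h):

// Stub standing in for the COM interface, illustration only.
struct IDirect3D8 { void Release() {} };

class Renderer
{
protected:
    void* m_RenderingContext;   // generic handle shared by every back end
public:
    Renderer() : m_RenderingContext(0) {}
    virtual ~Renderer() {}
};

class D3DRenderer : public Renderer
{
    // The only place that knows the real type behind the void*.
    IDirect3D8* direct3d() { return static_cast<IDirect3D8*>(m_RenderingContext); }
public:
    void Shutdown()
    {
        if (m_RenderingContext)
        {
            direct3d()->Release();
            m_RenderingContext = 0;
        }
    }
};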

For the sizes all you can count on is that
sizeof(short) >= 16bits
sizeof(int) >= sizeof(short)
sizeof(long) >= sizeof(int)

int is the most convenient 2's complement integer.
I agree with null_pointer about using typedefs where it matters.
I use

typedef unsigned char byte;

simply because I often want to shift bytes, and if it's signed a right shift will propagate the sign.
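
For example (a small sketch; exactly what comes back from the signed shift is implementation-defined, but on common compilers the sign bit is dragged along):

#include <iostream>

typedef unsigned char byte;

int main()
{
    signed char s = -112;   // bit pattern 0x90
    byte        u = 0x90;   // same bit pattern, unsigned

    // The negative signed value usually shifts the sign bit back in;
    // the unsigned value always shifts in zeros.
    std::cout << (s >> 4) << std::endl;   // typically prints -7
    std::cout << (u >> 4) << std::endl;   // always prints 9
    return 0;
}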


As far as casting objects, you should use dynamic_cast rather than static_cast. dynamic_cast checks to see if what you have is really a descendant of the base class, and returns 0 if you are incorrectly casting a pointer, or throws bad_cast if you are casting a reference. You shouldn't have to cast at all, however, if you create a base class that has virtual methods for all its calls.

Get Bjarne Stroustrup's The C++ Programming Language, either a recent printing of the 3rd edition or the special edition.

Ranked in ascending order of desperation:

static_cast - your basic C-style implicit cast from void*, and most of the well-defined explicit conversions.

const_cast - scary, casts a const type to a non-const type, but it isn't guaranteed to work if you modify an object specified as const. Useful for passing const objects to old code and those few functions that never seem to use "const" even though they guarantee you that they won't modify the object. (DirectDraw and LPRECT, anyone?)

reinterpret_cast - scary, only copies bit patterns, no other rules. Useful for flagging non-portable conversions, directly accessing an object's memory *shudder*, and other nasty things you usually don't want smeared all over your project.

dynamic_cast - casts up, down, or across a class hierarchy. Casting up takes no overhead (casting up is also implicit, no real need for dynamic_cast there), but casting down or across uses a small run-time check to make sure the conversion is successful and points to a valid object. If it fails, it returns a null pointer. Useful for flagging design problems, and those very rare occasions when it's actually necessary. If you want it to throw an exception instead of returning a null pointer when the cast fails, cast using references instead of pointers.

Construction cast - not really a cast, it's actually just a constructor call that constructs an unnamed temporary object and gives it a value. Useful for controlling truncation and other problems in mathematical formulas, and explicit conversions of user-defined types by construction.

C-style cast - scary, not very descriptive, acts like either static_cast or reinterpret_cast, depending on the situation. Useful only because it allows some measure of compatibility with C code in a C++ compiler.


static_cast and reinterpret_cast are basically two different levels of the C-style cast. The rest of the C++ cast operators *_cast are specialty casts that apply to certain rare situations.
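
To put most of those side by side (a quick sketch; Base, Derived, and legacy_print are invented for illustration):

#include <iostream>

struct Base { virtual ~Base() {} };
struct Derived : Base { void hello() { std::cout << "Derived here" << std::endl; } };

// An old-style function that takes char* even though it never modifies the string.
void legacy_print(char* s) { std::cout << s << std::endl; }

int main()
{
    Derived d;

    // static_cast: the well-defined conversions, e.g. a void* back to its real type.
    void* anonymous = &d;
    Derived* dp = static_cast<Derived*>(anonymous);

    // const_cast: strip const to call the old function (safe only because it
    // really doesn't modify the string).
    const char* msg = "hello";
    legacy_print(const_cast<char*>(msg));

    // reinterpret_cast: look at the same object as raw bytes.
    unsigned char* raw = reinterpret_cast<unsigned char*>(dp);
    std::cout << "first byte: " << (int)raw[0] << std::endl;

    // dynamic_cast: checked downcast; yields a null pointer if the object
    // is not really a Derived.
    Base* b = &d;
    if (Derived* ok = dynamic_cast<Derived*>(b))
        ok->hello();

    // construction cast: an unnamed temporary, handy for making truncation explicit.
    double ratio = 2.0 / 3.0;
    std::cout << "percent: " << int(ratio * 100.0) << std::endl;

    return 0;
}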

Edited by - null_pointer on March 22, 2001 3:41:21 PM

Gorg says:
"I know for x86 computers and sparcs, int is 4 bytes."

Nope, in 16-bit Windows "int" was 2 bytes, at least with Microsoft compilers.


Grib says:
"sizeof(short) >= 16bits"

Nope. There is no such statement in the C/C++ language spec. I vaguely remember using a C compiler on the Commodore 64 where a short was 1 byte.


Here are your guarantees:
sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)

That's it. Note that it doesn't say that sizeof(char) is 1 byte; it says simply that it is 1. In the vast majority of cases this will be one 8-bit byte, but it is not guaranteed.

Here's some more stuff to screw you up:

sizeof(long)=4 in 32-bit and 64-bit Windows. sizeof(long)=8 in most 64-bit Unixes but not all.

sizeof(void*) == sizeof(int) in 32-bit Windows and 32-bit Unix. Only sometimes in 16-bit Windows, and never in 64-bit Windows.

-Mike

If I remember right, int is the fastest variable type? I thought longs and shorts were done by the compiler using typecasts to int anyway? Though I read this in a C++ book somewhere? I am probably wrong!

Windows SUCKS! Deal with it!
if(windows crashes)
{
run Linux
}
else
{
yea right!!
}

Resist Windows XP's Invasive Product Activation Technology!

Basically you can never truly count on the standards because no one ever sticks to them.
int is usually faster only because it is usually defined to be the processor's preferred data bus width, but again you can't count on this 100%.
If you really want to be sure, and your code depends on exact sizes, then check the size of the types in the code (I think you can do this in a precompile step but I can't remember how).
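
One way to do that check is the old negative-array-size trick, which breaks the build if a size comes out wrong (a sketch; the macro name here is just made up):

// An array with a negative size is a compile error, so these lines act as
// compile-time assertions about type sizes.
#define COMPILE_TIME_CHECK(expr, name) typedef char name[(expr) ? 1 : -1]

COMPILE_TIME_CHECK(sizeof(int)  == 4, int_must_be_4_bytes);
COMPILE_TIME_CHECK(sizeof(long) == 4, long_must_be_4_bytes);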

There is way too much fear here, and a lot of bad information as well. I'm getting my information from "The C++ Programming Language" by Bjarne Stroustrup, so if you have any problems with this information, email the guy who created the language.

Statements that are known to be true for any standards-conforming compiler:

1 == sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
1 <= sizeof(bool) <= sizeof(long)
sizeof(char) <= sizeof(wchar_t) <= sizeof(long)
sizeof(float) <= sizeof(double) <= sizeof(long double)

char is at least 8 bits
short is at least 16 bits
long is at least 32 bits

sizeof(T) == sizeof(signed T) == sizeof(unsigned T)

sizeof(char) is the base unit; all sizeof() results are multiples of the base unit. If you have a 12-bit char, the size of all types will be in multiples of 12 bits. This is rare.

An int is supposed to be the best way to manipulate integers on the target machine.

Enumerations vary in size, depending on how many unique values are represented and some obscure compiler optimizations (most likely depending on how you use the enumerated types in your program).


That's IT.


quote:
Original post by Anon Mike

Grib says:
"sizeof(short) >= 16bits"

Nope. There is no such statement in the C/C++ language spec. I vaguely remember using a C compiler on the Commodore 64 where a short was 1 byte.



Your statements might be true in C, but not in C++. They are different languages, and assuming they are equal can get you into trouble.

*edited to correct some stupid mistakes*

Edited by - null_pointer on March 22, 2001 5:33:40 PM

Oh, gimp, I missed a critical part of your post - the explanation of why this seemingly "vague" standard exists!

Here's how it works. In the early 1980s, Mr. Stroustrup and a few other people developed this variant of a C compiler over the course of several years, for their own work. As they kept adding features and changing some syntax, it became known as "C with classes". More and more things were added, some stupid stuff from C was corrected, and somewhere along the way it became C++ and the C++ standard library was born.

Now, note that C++ is a language. A language is not a compiler; and a language is NOT determined by what a particular compiler or platform implements. A language is determined by its standard.

The standard sets forth certain guarantees that each compiler must implement. However, there are a variety of machine architectures out there, each with different optimal data sizes and formats.

For example, x86 CPUs in Win32 are faster with 1-byte and 4-byte data types than with 2-byte data types. Thus MS Visual C++ implements a bool and char as 1-byte data types, and an int as a 4-byte data type. What is important to remember is that in most cases this doesn't change the way in which we use these types. For most operations, it would be okay for the compiler to implement int as an 8-byte number on platforms that performed better with 8-byte numbers.


#include <iostream>

struct some_data_struct {
    bool b1, b2;
    int x, y, z;
};

int main()
{
    some_data_struct d;

    d.b1 = true;
    d.b2 = false;

    // always prints "d.b1 == true"
    std::cout << "d.b1 == " << ((d.b1) ? "true" : "false") << std::endl;

    d.x = 10;            // always 10
    d.y = d.x * 2;       // always 20
    d.z = d.y * 2 + d.x; // always 50
}



That code is portable. No matter which compiler I use, it will always run as I expected when I wrote it, because it conforms to the language standard. That's the beauty of a good standard!

Standards, then, are a way of telling compilers what the programmers expect, and vice versa. When a standard hits upon just the "right amount" of vagueness, programmers and compiler-writers are free to do their own jobs with minimal interference from each other.

Writing truly portable code is an art; some of the expectations or assumptions that programmers place upon their code are very hard to see if you aren't very experienced with porting.

However, most expectations are quite obvious, and in fact most C++ code is very portable. It's that last 5%, filled with implicit casts, possible truncations, etc., that makes for nightmares. Most developers don't need to port code; however, it's good to learn something about it.

So it's generally best to program according to the C++ standard, and then adjust the code if need be to the compiler. Most people who would disagree with me on this don't know that they do it all the time; they are simply so familiar with their particular compiler or machine architecture that they don't realize it.


I second Grib's recommendation: get that book if you intend to program in C++!

Gotta go, sorry for any mistakes!

Edited by - null_pointer on March 22, 2001 6:49:41 PM
