
malloc


Hi! I'm a new user of Visual C++ and OpenGL, and I'm confused... The C++ compilers for DOS have a function called 'malloc':

void *malloc(size_t size);

where size_t is an unsigned int, which seems to only hold values between 0 and 65535 (I think...). Right... But I have visited some sites (like the NeHe tutorials, etc.), and I've noticed that people are using malloc with huge values, like:

var = malloc(700 * 700 * 10);

??? I don't understand this... Is it correct? I have also noticed this in the NeHe tutorials: in Lesson 6 (Linux/GLX version), Mihael Vrbanec used malloc to read bitmap files too... OK, what I would like to know is whether I can use malloc with large values! Because if we have, for example, a bitmap of 2 megabytes, in my mind malloc will fail... Thanks, everybody!

ints are 32 bits => 2^32 = a lot more than 65536 possible values.

Perhaps the good old 16-bit DOS-mode compilers used 16-bit integers.
A 32-bit system allows allocating a lot more than 64 KB.

Thank you a lot, cds_560! I enjoyed your reply! :-)

I remember now! The DOS compilers used to work in 16-bit mode..., and 2^16 = 65536 bytes.

Windows works with 32 bits: 2^32 = 4294967296 bytes.

But there's one thing I haven't understood yet...

Why does the MSDN help say that malloc() has an integer parameter, and not a value with a size of 32 bits?

It's because int's size is 32 bits there. A 16-bit integer would be short.

My old Turbo Pascal 5.0 compiler used 16-bit variables that were called integer.

In fact, the term 'integer' only specifies the kind of the variable: char, short, int and long are all integer types.

As far as I know, ANSI C++ (MSVC++ is not exactly ANSI C++, but close enough) only guarantees that int is at least 16 bits; on 32-bit compilers such as MSVC++ it is a 32-bit integer type.

Wait... I know 'INT' is 32 bits, and I know 'LONG' is 32 bits, but I am pretty sure that 'int' expands to 'short int' unless specified as 'long' or 'long int', and short int is only a 16-bit variable. 8-bit vars come in the form of 'char', I think, and the compiler manages 64-bit variables under the name __int64 or INT64 instead of those nasty unions.

-----------------------------
The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.

int expands to the easiest unit for the processor to handle, for maximum performance. So, on a 32-bit platform, int is 32 bits (same as long). On a 16-bit platform, int is 16 bits. This was great fun when converting DOS-based stuff to Windows... especially if there were ints (read: shorts) stored in files and a program tried to read ints (read: longs) from those files...

Of course, my way of solving the problem would be to supply malloc with a huge number and see if the computer locks up or not.

Heh, I think this is easier:

  if (sizeof (int) == 2)
      ; // We have a 16bit compiler
  else if (sizeof (int) == 4)
      ; // We have a 32bit compiler
  else
      PANIC!!!!!!

Edited by - Kippesoep on January 4, 2002 6:23:25 AM
