malloc


Hi! I'm new to Visual C++ and OpenGL, and I'm confused... The C++ compilers for DOS have a function called 'malloc': void *malloc(size_t size); where size_t is an unsigned int, which seems to have values between 0 and 65535 (I think...). Right... But I have visited some sites (like the NeHe tutorials, etc.), and I've noticed that people are using malloc with huge values, like: var = malloc(700 * 700 * 10); ??? I don't understand this... Is it correct? In the NeHe tutorials I also noticed this: in Lesson 6 (Linux/GLX version), Mihael Vrbanec used malloc to read bitmap files too... OK, what I would like to know is whether I can use malloc with large values! Because if we have, for example, a bitmap of 2 megabytes, in my mind malloc will fail... Thanks, everybody!

Thank you a lot, cds_560! I enjoyed your reply! :-)

I remember now! The DOS compilers used to work with 16-bit addressing, and 2^16 = 65536 bytes.

Windows works with 32 bits: 2^32 = 4,294,967,296 bytes.

But there's one thing I haven't understood yet...

Why does the MSDN help say that malloc() takes an integer parameter and not a 32-bit value?

It's because int's size is 32 bits; 16 bits would be a short.

My old Turbo Pascal 5.0 compiler used 16-bit variables that were called integer.

In fact, the term 'integer' only specifies the kind of variable:
char, short, int and long are all integer types.

I think ANSI C++ (MSVC++ is not exactly ANSI C++, but close enough) only guarantees that int is at least 16 bits; on 32-bit Windows compilers, int is a 32-bit integer type.

Wait... I know 'INT' is 32-bit, and I know 'LONG' is 32-bit, but I am pretty sure that 'int' expands to 'short int' unless specified as 'long' or 'long int', and a short int is only a 16-bit variable. 8-bit vars come in the form of 'char', I think, and the compiler manages 64-bit variables under the name __int64 or INT64 instead of those nasty unions.

-----------------------------
The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.

int expands to the easiest unit for the processor to handle, for maximum performance. So on a 32-bit platform, int is 32-bit (the same as long); on a 16-bit platform, int is 16-bit. This was great fun when converting DOS-based code to Windows... especially if there were ints (read: shorts) stored in files and a program tried to read ints (read: longs) from those files...
