
16-bit programming HELL!

This topic is 4599 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.


I was sifting through some old 16-bit C++ code (helping someone new to programming who is currently learning very old material) when I came across a problem I've never encountered before (probably because I moved to 32-bit long before then). I need to allocate a buffer larger than 64k, but every time I try, the data seems to become corrupted. I've tried everything I know:

- Using far/huge pointers.
- Setting the compiler to use the 'large' memory model.
- Using 'farmalloc()' instead of 'new' or 'malloc()'.
- Shaking my fists while threatening the compiler and cursing segments and offsets.

Nothing seems to work. Can anyone please give some insight as to what I may be doing incorrectly? Thanks in advance.

Never really worked in a 16-bit environment.

But perhaps it has something to do with the fact that with 16 bits, the maximum value is 65535 (0xFFFF), which is 64k? Maybe a pointer cannot handle anything bigger?

Yes, with 16 bits it will wrap to zero after 65535. But I am using *far* pointers, which, to my knowledge, in a 16-bit environment use 20 bits (two WORD values using that segments-and-offsets crap) to access up to 1 MB of RAM.

I found this. It may apply to your situation. Basically, it says the parameter to malloc may only be a 16-bit int, so you cannot make any buffer larger than 65535.

Quote:
What's the actual declaration of the pointers you're using? And what compiler are you using?

I am using Turbo C++ (I don't know why that school insists on still using it). I have tried using both 'large' and 'huge' memory models. (Huge is the same as large, but it normalizes the pointers and avoids segment wraparound.) I have tried explicitly creating pointers of various datatypes with both 'far' and 'huge' declarators. The sizeof() operator always says they are four bytes long too, as they should be.

Quote:

I found this. It may apply to your situation. Basically, it says the parameter to malloc may only be a 16-bit int, so you cannot make any buffer larger than 65535.

Nope. I am passing a calculation (320*240*1) as the parameter. I also tried using a compiler-specific function, 'farmalloc()', which is supposed to use the 'far' heap no matter what. Still nothing.

Could it be that the code generated by this compiler just won't work on more modern machines? It's generating 286 instructions. All 80x86 processors *should* be backwards compatible, but you never know...


[EDIT]
Hmmm. I just checked the error return of my code and it says 'farmalloc()' is returning NULL. I missed that before. Not sure what it means though.

Maybe I'll just say to heck with this and use 64k chunks.
[/EDIT]

Quote:

Nope. I am passing a calculation (320*240*1) as the parameter. I also tried using a compiler-specific function. 'farmalloc()' which is supposed to use the 'far' heap no matter what. Still nothing.


The fact that it's a calculation makes no difference. It still produces a value which is passed to the function, and this value is probably truncated to a 16-bit value, or basically masked with 0xFFFF.

Guest Anonymous Poster
16-bit mode only allowed access via segments of up to 64k. Any single allocated structure (memory block) could have at most 64k as its size.

You could use HUGE pointers (which have segment and offset components) to chain a bunch of nodes, each being 64k (separately allocated). The HUGE pointer could be used to access across segment boundaries. I seem to remember HUGE had extra code to renormalize segment+offset when you crossed a segment boundary while walking (ptr++) the pointer. (Look in the documentation for HUGE...)

It probably was possible to statically allocate several contiguous 64k blocks, but the C dynamic allocator would not be guaranteed to do that with any consistency.

I remember Borland C++ 5 for DOS used protected mode which allows arbitrary memory allocation. Try to find any option that will make 32-bit PM executables. Other than that, you're left with huge pointers or managing segments by yourself.

As Ilici suggests, I'd go for using protected mode - knowing about how to get around the pain of x86 real mode isn't exactly much of a valuable skill in this day and age...

You could persuade the school to save a lot of pain and use the free Open Watcom http://www.openwatcom.com/, that comes with protected mode DOS extenders.

Watcom used to be a commercial product back in those days, and was particularly well respected by game developers.

Quote:

The fact that it's a calculation makes no difference. It still produces a value which is passed to the function, and this value is probably truncated to a 16-bit value, or basically masked with 0xFFFF.

Yeah, I realized what you were saying a little while later. Anyway, I tried farmalloc(long), and the function failed. It was late and I didn't feel like fixing my post.

Quote:

As Ilici suggests, I'd go for using protected mode - knowing about how to get around the pain of x86 real mode isn't exactly much of a valuable skill in this day and age...

This is simply not a possibility. The school insists on using this ancient compiler and my friend has no other alternative. The compiler itself existed before 32-bit protected mode. It MUST be 16-bit code. Personally, I can't stand using the thing, and would never have touched it again if not for this situation. I got my fill of 16-bit a while back.

I tried talking to the teacher several years ago when I attended. I suggested several 32-bit compilers. But I can see that never got very far. (BTW: It's only an intro high school class, so it's not like it's earth-shatteringly important to them)

Well, thanks anyways, folks, for the time and effort.

Using 16-bit mode, you can only manage data structures with a maximum size of 64KB. However, when using far pointers, you may create multiple data structures, each of size <= 64KB.

Maybe you can call DOS functions to allocate memory chunks greater than 64KB. Did you try this?
Read here.

Does the runtime library of Turbo C manage more than 64KB of heap? If you do not need the memory to be contiguous, you may write a simple helper class that allocates multiple blocks via malloc (using far pointers) to provide more than a 64KB memory chunk.

From the Turbo C++ documentation: about 'far'

far <type modifier>

Forces pointers to be far, generates function code for a far call
and far return.

<type> far <pointer-definition>;
OR
<type> far <function-definition>


The first version of far declares a pointer to be two words with a range
of 1 megabyte. This type modifier is usually used when compiling in the
tiny, small, or compact models to force pointer to be far.

...



From the Turbo C++ documentation: about farmalloc()

void far *farmalloc(unsigned long nbytes);

farmalloc allocates a block of memory nbytes long from the far heap.
For allocation from the far heap, note that

*All available RAM can be allocated
*blocks larger that 64k can be allocated
*far pointers are used to access the allocated blocks

...



I've already coded a simple solution to split the data into chunks. I was trying to avoid it because I'm writing a software graphics library and with such an old compiler, I need all the speed I can get.

So does farmalloc work as expected? Then you should stick to it. Otherwise, try the direct call to DOS.

Quote:
Original post by lack o comments
Nope. I am passing a calculation (320*240*1) as the parameter. I also tried using a compiler-specific function. 'farmalloc()' which is supposed to use the 'far' heap no matter what. Still nothing.
This is exactly what I expected to see in your code. If you hadn't posted this I would have asked you.

I would have thought that there'd be more 16-bit programmers around that would know this. Anyway, I know exactly what your problem is because I program for a 16-bit machine regularly and have made the same mistake.
Here it is: your calculation is 320 as a WORD, multiplied by 240 as a WORD, multiplied by 1 as a WORD.
The result is stored in a WORD! The compiler does not know that the result of the multiplication is too big for a WORD.

Instead write:
320UL*240UL*1UL
'UL' means unsigned long when put at the end of a constant.
or:
320L*240L*sizeof(char)
if you prefer (assuming that I've guessed correctly what the 1 was for [smile]). 'L' means long when put at the end of a constant.

In my case I did this:
(NUM_RECS * sizeof(myStruct))
but NUM_RECS is 1000 and sizeof returns a WORD, so if sizeof(myStruct) > 65 you get a WORD times a WORD overflowing again. I had to make it 1000L.

Now, how about declaring that 320L as a constant or #define eh! (tsk tsk)

Actually, the literals were only to let everyone know how much memory I was allocating. I always specify sign and size when working in 16-bit (learned that the hard way long ago). The real equation used variables of type 'unsigned short int'. However, I found part of the mistake. Here is how it looked:

g_BackBuffer.Data = (BYTE far*)farmalloc((DWORD)(g_BackBuffer.Width*g_BackBuffer.Height*g_BackBuffer.Bpp));

Unlike my newer compilers, Turbo applied that (DWORD) cast only after the multiplication had already wrapped in 16 bits, so casting the finished product did nothing. I tried putting a cast in front of each variable instead, and the function returned successfully.

The problem of corruption still remains though. As a test, I allocated 76800 bytes, set all of the values to zero and the last one to 101. I then looped through the array until I found the byte with 101. For some strange reason, it's always byte 11163. Except when I use huge pointers. Then the byte can range by a few hundred. [headshake]

BTW: Someone mentioned DOS calls. Unfortunately, DOS only recognizes the first 640k. So, I'm still screwed ;)


