Is there a complex set of programming variances and rules depending on what bit width you target, what compiler, what architecture, etc.?

Because programming is supposed to be straightforward. But it's not straightforward when you take into account the complications involved with compiling, how many bits a program is, how to know how many bits a program is, what the number of bits even changes, and how much RAM programs use.
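As a concrete aside on the "how to know how many bits a program is" part: one common way is to check the pointer size inside that program, since it is a property of how the build was compiled rather than of the machine it runs on. A minimal C++ sketch of that idea (just an illustration, not a definitive method):

[code]
#include <iostream>

int main() {
    // sizeof(void*) is 4 bytes in a 32-bit build and 8 bytes in a
    // 64-bit build, so multiplying by 8 gives the pointer width in bits.
    std::cout << "This build uses " << sizeof(void*) * 8
              << "-bit pointers\n";
    return 0;
}
[/code]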

I'm starting to wonder if it's the programmer's job to deal with all of this, or if the system basically lays the bricks while the programmer just applies the glue.

Also, take game engines, for one. Developers didn't even use engines back when they made DOS games in the early '90s.

The highly complex background of classification, coding rules, code organization, and memory allocation back in the '90s is just a step back from the even more complicated 64-bit (and, for SIMD registers, 128-bit) architectures used today.

With so many mix-ups and variations in possible bit widths, execution, and compatibility, and with the sheer amount of confusing knowledge required to understand the whole process, from retro 16-bit DOS support systems all the way up to high-level 3D mathematics and modern game engines like Ogre, how could anyone rationally and realistically believe that one person can comprehend all of these things 100% accurately and program as an expert? And that's before you even get to the behind-the-scenes nightmares like library files (.lib), header files (.h), #define rules, and getting everything to link together.

Why can't I write 1-bit programs on 64-bit operating systems?
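For what it's worth, the closest thing to a 1-bit program on ordinary hardware is packing single bits into a wider type, since a byte is the smallest addressable unit of memory. A minimal C++ sketch of that idea (the bit positions here are arbitrary, chosen only for illustration):

[code]
#include <cstdint>
#include <iostream>

int main() {
    // Eight independent 1-bit values packed into a single byte.
    std::uint8_t flags = 0;

    flags |= 1u << 3;               // set bit 3 to 1
    bool bit3 = (flags >> 3) & 1u;  // read bit 3 back
    flags &= ~(1u << 3);            // clear bit 3 to 0

    std::cout << "bit 3 was " << bit3
              << ", byte is now " << int(flags) << "\n";
    return 0;
}
[/code]

Whether that counts as a "1-bit program" is a matter of taste, but it is how 1-bit values are actually stored and manipulated on a 64-bit machine.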

Why can't I program 64-bit operating systems in 16-bit Windows executable files (.exe)?

Why should I believe that programming as a whole is intrinsically worth the effort when I'm limited to a range of development access, file incorporation into a system, a set of rules imposed by an operating system's API, and system functionality in and of itself?

I should be free to write my 1-bit masterpieces strictly independent of the underlying architecture.

Isn't the whole point of computers and logical machines to allow the unimaginable?

Well, I'm not unimaginative, and I'm imagining things that aren't happening.

So I guess programming fails when you add this all up.

And that's not even counting the fact that so many Windows OS versions are not compatible with 16-bit applications, when the system itself is four times that width.

E-Fail. I'm not unimaginative, and I'm not impressed by computers' lackluster incorporation of functionality in this day and age.

Once I can create 1-bit applications on a 64-bit OS, let me know!

Otherwise, debate me and try to prove that what I'm saying is 100% wrong.

Please read this as well, as it follows on from this post:

http://www.gamedev.n...-say-otherwise/

http://en.wikipedia.org/wiki/Word_%28computer_architecture%29
Make a program that emulates such a system, then. Make sure it only has instructions for boolean operations and memory access, so I can make my own superior number format instead of some predefined-size float with a predefined-size mantissa, k?

o3o
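For anyone who actually wants to try that suggestion, here is a rough, hypothetical sketch of such a machine in C++: memory is just an array of bits, and the only instruction is a NAND of two bits stored into a third. NAND is functionally complete, so any boolean operation (and, in principle, any custom number format) can be built on top of it. This is not from any existing library, only an illustration of the idea:

[code]
#include <array>
#include <cstddef>
#include <iostream>
#include <vector>

// One instruction type: mem[dst] = NAND(mem[a], mem[b]).
// NAND is functionally complete, so any boolean circuit can be built from it.
struct Instr { std::size_t dst, a, b; };

int main() {
    std::array<bool, 16> mem{};   // 16 bits of "RAM", all initially 0
    mem[0] = true;                // input bit A
    mem[1] = true;                // input bit B

    // Program: compute A AND B into bit 3 using two NANDs
    // (NAND, then NOT implemented as NAND of a bit with itself).
    std::vector<Instr> program = {
        {2, 0, 1},   // mem[2] = NAND(A, B)
        {3, 2, 2},   // mem[3] = NAND(mem[2], mem[2]) = NOT mem[2] = A AND B
    };

    for (const Instr& ins : program)
        mem[ins.dst] = !(mem[ins.a] && mem[ins.b]);

    std::cout << "A AND B = " << mem[3] << "\n";  // prints 1
    return 0;
}
[/code]

A real emulator would also need control flow (for example a conditional jump keyed on a bit) and some form of I/O, but the core execution loop would look much like this.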

[quote]
Make a program that emulates such a system then.
[/quote]

That's a great idea! I'll look into it.

