EqualityAssignment

Reasons why applications are non-portable


First off, I'm honestly sorry for starting a thread with such a vague topic. What sorts of things make an application non-portable?

I know machine code compiled for one processor family won't execute on another, so that's one dimension of non-portability. Even a pretty basic command line app (say, diff) needs to be compiled for each processor family separately, right? But as long as they were all running on Pentiums, users on Ubuntu, OS X, and Windows 7 could all use the same diff binary? (Of course none of that matters for languages that run on a VM or an interpreter.)

More complex applications might want to create a window, which requires at least a bit of OS-dependent code, generally a call into something like the Win32 API. (The *nix world has multiple desktop environments, so maybe desktop-environment-dependent code over there?) If the application wants to follow the standard conventions for applications on its OS (minimize, maximize, and close buttons in the top right; Tab moving you across the GUI controls), or generate graphics on the GPU, then it will need to rely on the API even more, as opposed to ignoring the OS's conventions and drawing its GUI in software (which still requires some kind of platform-dependent blit call and window maintenance, like working the event pump).

Reading and writing files is pretty portable, though directory structures aren't, at least not across OSes. Storing configuration settings (the registry vs. config files in $HOME, I think?) isn't. I guess TCP and UDP sockets implement the Berkeley sockets API pretty much everywhere, so that's portable, and you're always on your own with raw sockets if you want to punish yourself with their use. I don't know anything about interfacing with hardware like USB devices or printers (or potentially USB printers), but I imagine getting at the devices is non-portable, though maybe USB devices present a consistent API across OSes once you get to them.

Oh, and then different compilers for the same compiled language might do things a little bit differently (gcc vs. Borland's compiler)? I'm not even going to think about consoles, smart phones, or microprocessors ...

Is this even half right? I first started programming several years ago, though I would never claim to be particularly good at it. But somehow this all still makes my head spin ... Perhaps it would start to click if I spent some more time with another operating system. (I've been on Windows XP for as long as I've known enough to tell you what an operating system is, though I dabbled a bit with http://www.damnsmalllinux.org/ . Yeah, not a great distro to pick for a first experience with Linux, I know, but ... it was cool! It fit on my USB drive ...)

Operating system.

Binary code can be automatically translated between similar generations of processors, and some instructions can be emulated; even the whole x86 instruction set these days effectively runs on a 'VM' that translates it into what is essentially RISC microcode. Not the best solution, but it can be done.


The OS, on the other hand, is essentially made up on the spot. The definition of what a process or a thread is lies with whoever writes the kernel scheduler, and the same goes for what malloc does, for file systems, and for everything else.


The Linux and Unix world can, given enough source code or a stable ABI, be made interoperable via POSIX. Not necessarily portable as such, but similar enough to make porting fairly trivial.
Something similar holds for the Windows ecosystem, which is more or less compatible back to Windows 2000, perhaps even 95, and in some extreme cases all the way back to 3.1.

But these are all made-up conventions. When there was a need for a certain service, facility, or abstraction, the authors of the OS had to design it. Since there was nothing to lean on, they had to invent things (at some point, a GPU HAL simply didn't exist).

[quote]generate graphics on the GPU then it will need to rely on the API[/quote]
This is a voluntary choice. Before standardized driver models existed, each application had to access the hardware directly. That worked when there were only 7 graphics cards in existence, each with about 100 interrupts.

Later, VESA introduced some standards that are still somewhat in use today. But for anything beyond the basics, you need very complicated and painfully proprietary drivers supplied by the vendor, and those target a standardized ABI, or sometimes an API or HAL or whatever TLA is in fashion. The effort needed to support each driver individually is too big and brings too little gain for anyone but the hardware manufacturer to bother, so drivers target standards. The same goes for all hardware, not just graphics.

[quote]consistent API[/quote]
An API is a very high-level construct. It requires settling on a language (usually C), on parts of the C standard library (malloc, typedefs), on the OS kernel structures and driver model, and on just about everything else. What is usually called an API is built on top of an existing OS API and language, along with all the facilities they provide.

[quote]TCP and UDP sockets implement the Berkeley sockets API[/quote]
The IP networking stack doesn't need Berkeley sockets; that is just one particular definition of a networking API in POSIX. Most OSes provide something similar.

But to make two devices talk over the internet, you don't need Berkeley sockets, just enough code to generate the correct wire traffic for the protocol in question. The API just makes programming more standardized; it could be anything else.
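
For reference, this is roughly what the Berkeley/POSIX flavour of that API looks like. A minimal sketch that sends one UDP datagram; the destination address and port are arbitrary example values, and on Windows the nearly identical Winsock calls would additionally need WSAStartup() and different headers:

[code]
/* Minimal sketch: send one UDP datagram with the Berkeley sockets API.
 * The destination 127.0.0.1:9000 is an arbitrary example. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);          /* UDP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(9000);
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    const char msg[] = "hello";
    if (sendto(fd, msg, sizeof msg - 1, 0,
               (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");

    close(fd);
    return 0;
}
[/code]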

Different operating systems also use different formats for the executables. They'll also provide their own standard library implementations - I'm not sure why you think reading and writing files is portable, unless I'm misunderstanding. The standard library homogenizes the different interfaces provided, but that library implementation has to be specialized.

The above is all correct, but maybe I can help make it a little more concrete.

One difference is the executable formats. A "binary" executable file is actually an archive that gives the OS instructions on how to load the program into memory. This includes: which sections of the file should be loaded where; what access level they should use (global variables should be writable, code shouldn't); what shared libraries to link to, and how to patch up the references to them; etc. Linux uses a format called ELF to store all this information, Windows uses a different one called PE, and other operating systems no doubt have their own standards. (On Linux, you can use the "objdump" program to inspect the full headers of a binary file.)
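
As a quick illustration (just a sketch, not a real format parser), the container formats are easy to tell apart from the first few bytes of a file:

[code]
/* Sketch: identify an executable's container format from its magic bytes.
 * ELF files begin with 0x7F 'E' 'L' 'F'; Windows PE files begin with the
 * old DOS "MZ" header.  Pass the path of any binary as argv[1]. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned char magic[4] = {0};
    fread(magic, 1, sizeof magic, f);
    fclose(f);

    if (memcmp(magic, "\x7f" "ELF", 4) == 0)
        puts("ELF (Linux and most other Unix-likes)");
    else if (magic[0] == 'M' && magic[1] == 'Z')
        puts("MZ/PE (Windows)");
    else
        puts("some other format");
    return 0;
}
[/code]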

If all your code does is computation and memory access, this is all that matters. But a useful program also needs to interact with resources provided by the OS to communicate with the outside world. The low-level details of this are also OS-specific. On Linux, any time you want to ask the kernel to do something like read a file, you need to do a system call. In x86 Linux, this is done by putting values in certain registers and executing an "int 0x80" instruction that jumps to a special privileged entry point. Other operating systems and processor types do it differently.
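
For the curious, here's roughly what that looks like. A sketch that only builds on 32-bit x86 Linux with GCC, where syscall number 4 selects sys_write:

[code]
/* Sketch: call write(2) directly via "int 0x80", bypassing the C library.
 * 32-bit x86 Linux only; eax = 4 selects sys_write on that ABI. */
int main(void)
{
    const char msg[] = "hello from int 0x80\n";
    long ret;
    __asm__ volatile (
        "int $0x80"
        : "=a" (ret)              /* return value comes back in eax  */
        : "a" (4),                /* eax = 4 -> sys_write            */
          "b" (1),                /* ebx = file descriptor 1, stdout */
          "c" (msg),              /* ecx = buffer                    */
          "d" (sizeof msg - 1)    /* edx = byte count                */
        : "memory");
    return 0;
}
[/code]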

If you're programming in a high-level language, this level of detail is normally hidden from you. For instance, Linux implements the POSIX API that provides functions like open(), read(), and write(). When you compile a C program, the linker looks in the standard C library and plugs in the assembly code that performs the underlying syscall. So if you stick to the standard APIs, your binaries won't be portable, but your source code will be. You can just recompile on a different platform against the standard library for that machine.

This only works for POSIX-compliant OSes, though. So for instance, if you try to transplant a program from Linux to Windows this way, it will fail, because Windows doesn't know anything about the open() function. It has something called OpenFile() that does mostly the same thing, but a bit differently. There are also bigger differences: creating a new process in Unix involves the fork() function, which simply has no direct equivalent in Win32. And don't even get started on the differences between GUI systems!
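
To make that concrete, here's the same "open a file for reading" step written twice. A sketch; on the Windows side it uses CreateFileA, the modern relative of OpenFile():

[code]
/* Sketch: opening the same file with the POSIX call and the Win32 call.
 * The source has to branch because neither OS understands the other's API. */
#include <stdio.h>
#ifdef _WIN32
#include <windows.h>
#else
#include <fcntl.h>
#include <unistd.h>
#endif

int main(void)
{
#ifdef _WIN32
    HANDLE h = CreateFileA("example.txt", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) { puts("CreateFileA failed"); return 1; }
    CloseHandle(h);
#else
    int fd = open("example.txt", O_RDONLY);            /* POSIX */
    if (fd < 0) { perror("open"); return 1; }
    close(fd);
#endif
    puts("opened and closed example.txt");
    return 0;
}
[/code]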

If you want to write a cross-platform program, the usual answer is to use libraries that handle the platform differences for you. For instance, you can use the POSIX API and simulate it with Cygwin, although that's a bit clunky. For GUIs, you can link with GTK, Qt or WxWidgets, each of which has multiple "back-ends" for different windowing systems.

I didn't know about the different formats for executables. I knew they had to have a data section and a code section, but I thought the only real differences would be in the instruction set.

[quote]This is a voluntary choice. Before standardized driver models ... 100 interrupts.

Later, VESA introduced some ... for all hardware, not just graphics.[/quote]
Ah, I think I get it. The OS supplies an API that abstracts away the drivers, and if you want interoperability, you use another API on top that tries to abstract away the OS's API.
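
Something like this tiny made-up wrapper would be that idea in miniature, I guess: one portable function, two platform-specific implementations chosen at compile time (sleep_ms is just a name I picked for the sketch):

[code]
/* Sketch of a tiny portability layer: the rest of the program calls
 * sleep_ms() and never touches the OS-specific API directly. */
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
static void sleep_ms(unsigned ms) { Sleep(ms); }           /* Win32 */
#else
#include <unistd.h>
static void sleep_ms(unsigned ms) { usleep(ms * 1000u); }  /* POSIX */
#endif

int main(void)
{
    puts("waiting half a second ...");
    sleep_ms(500);
    puts("done");
    return 0;
}
[/code]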

[quote]Different operating systems also use different formats for the executables. They'll also provide their own standard library implementations - I'm not sure why you think reading and writing files is portable, unless I'm misunderstanding. The standard library homogenizes the different interfaces provided, but that library implementation has to be specialized.[/quote]

Err ... >.>. Guess I was way off the mark here. (And yeah, I looked at the standard libraries and didn't see the specialization.) ty.

[quote]If you're programming in a high-level language, this level of detail is normally hidden from you. For instance, Linux implements the POSIX API that provides functions like open(), read(), and write(). When you compile a C program, the linker looks in the standard C library and plugs in the assembly code that performs the underlying syscall.[/quote]

Ah, gotcha. The assembly that gets "plugged in" is in a library, potentially shared (like msvcrt.dll on Windows). And if, like you said, you were to write your source for Linux, using the POSIX API, then compile against Cygwin, Cygwin would have its own library to plug in the right assembly for Windows (cygwin1.dll).


Spent a bunch of time looking terms up. Things make more sense now. I guess I had a fairly murky picture of how a standard library would actually _do_ the things it did, and that was preventing me from seeing how it dealt with incompatibility. (Actually, I remember opening the winapi headers in my C++ compiler years ago, when I had just recently started programming, hoping to figure out why this <winapi> thing I was importing could do stuff, like draw windows, that I couldn't. Yeah, that's not a standard library, I guess I'm sort of wandering. Discovering that there wasn't a useful .cpp file behind the header was frustrating.)

Thanks for making some things clearer guys :).
