Why make portable game code?

Why on earth would you need to make separate CDs for distribution? If you're dumb enough to make your media content machine-dependent, you're doomed anyway. Your executables and libraries should NEVER be too large to put multiple platforms onto a single disk. Your content typically makes up 85-90% of the distribution. If your executables are too large to fit onto a CD, you just plain did something wrong.

Linux servers are cheap. Most servers traditionally run Unix anyway, so moving to Linux makes sense on the server side. Only an idiot would develop a Windows-only server for a game that's going to see wide distribution. Services like battle.net and whatnot are nice, but it's virtually impossible for a company to maintain a large-scale network of servers without relying on end users a bit. For most end users, that means throwing together spare parts to make a cheap server for a few hundred bucks. Linux is ideal here.

Since you're developing a Linux server anyway, it's probably not that much extra work to do a Linux renderer and other code. Sure, it might be some extra work, but it doesn't seem to have been a huge issue for the guys at Epic, id, or Valve. Maybe they're just better programmers than you are, though.

So, while you may lose money if you look at it as "developing for Linux", you're actually saving money by not having to run as many servers of your own. When you consider what bandwidth, servers, and the rest cost, you can see why it's well worth the trouble to develop Linux-based servers.

---------------------------
Hello, and Welcome to some arbitrary temporal location in the space-time continuum.

quote:Original post by Oluseyi
quote:Original post by Structural
I haven't read the whole thread, but if you can't use 99% of your code on another platform it's probably a mess.
Just. Not. True.

If portability isn't a target/requirement, and platform-specific APIs are heavily used, that analysis above is just useless.

Include your caveats.


Platform-specific APIs, indeed. And how many of your modules do you believe make platform-specific calls? If you designed your application properly with your mind set on the future and reusability, then you probably have things like wrappers and interfaces.
Creating these wrappers and interfaces might seem like extra work, but in reality they help you increase your code's maintainability.
And code that is not maintainable is bad code, in my opinion.
Imagine you're searching for a bug in your networking code. If you have network code all over the place, you're in trouble, no?

The only problem I foresee is graphics, where you make a lot of calls to your library and often use special algorithms. But then again, if you thought about your application's structure, you have all this code isolated in one specific module, with a clean interface like "drawModel(Model* m)" or something like that. Then, if you want to switch to another graphics library, you only need to recode your drawModel function and you're set.
Same for networking. You have your networking module and you tell it "sendToAll(char* msg, int len)", and what happens under the hood is irrelevant to your core. Even IF you haven't used Berkeley's socket interface, you still have all network code isolated in one module, and you only need to recode that module.
Same goes for timers, threading, etc.
This is not "keeping portability in mind", but good design practice. Keeping things isolated to one portion of your application is ALWAYS a good idea.
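To make that concrete, here is a minimal sketch of the kind of wrapper described above. The class and function names are made up for illustration, not taken from any real engine:

// Hypothetical interfaces: the game core only ever talks to these.
struct Model;

class Renderer {
public:
    virtual ~Renderer() {}
    virtual void drawModel(Model* m) = 0;
};

class Network {
public:
    virtual ~Network() {}
    virtual void sendToAll(const char* msg, int len) = 0;
};

// One concrete backend per library/platform; only these change when porting.
class GLRenderer : public Renderer {
public:
    void drawModel(Model* m) { /* OpenGL-specific calls go here */ }
};

class BerkeleyNetwork : public Network {
public:
    void sendToAll(const char* msg, int len) { /* socket calls go here */ }
};

The core code holds a Renderer* and a Network*, so whichever platform-specific backend gets instantiated at startup is invisible to the rest of the game.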

quote:
O_O

Unless you don't want your multiplayer game to be scalable (i.e., able to handle a heavy load), you almost always have to use platform-specific network code to get good performance. Developing a solid platform-independent networking suite is not a trivial task.

The reason is that there are substantial differences between the high-performance networking APIs on the various OSs. This is generally related to the fact that multithreading isn't standardized across the various OSs either. So there are subtle issues which will drive up the QA testing costs.
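For what it's worth, the usual shape of that platform-specific code is to hide the per-OS headers and the readiness call in one place. This is a rough sketch under my own assumptions: the names are made up, and a real heavy-load server would swap the simple select() body for IOCP on Windows or epoll on Linux behind the same function:

// Hypothetical wrapper hiding the per-OS socket headers in one translation unit.
#if defined(_WIN32)
    #include <winsock2.h>
#else
    #include <sys/socket.h>
    #include <sys/select.h>
    #include <sys/time.h>
    #include <unistd.h>
    typedef int SOCKET;
    #define closesocket close   // Win32 and BSD sockets even disagree on close()
#endif

// Server code calls this; on Windows the body could later be replaced with
// I/O completion ports, on Linux with epoll, without any caller changing.
int waitForData(SOCKET s, int timeoutMs)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(s, &readSet);

    timeval tv;
    tv.tv_sec  = timeoutMs / 1000;
    tv.tv_usec = (timeoutMs % 1000) * 1000;

    // > 0: data is ready, 0: timed out, < 0: error
    return select((int)s + 1, &readSet, 0, 0, &tv);
}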

There are always special cases when it comes to networking. If you don't stick to Berkeley sockets then yes, you're going to have a hard time porting. But also keep in mind that if you have network code all over your project, and not isolated in one module with a clean interface, you're making things harder for yourself. And that's completely a design thing.
I know my network module runs on something as exotic as VxWorks as well as Win32. It did cost me one day of porting, though, because Unix's accept() does not fall through when you close the socket, so I had to do some nasty stuff to make it close correctly.
And I've seen methods for multithreading on these platforms using just a few #defines. Of course, this IS platform-specific code, but creating threads is NOT spread all over your project and is again isolated to one part of your application.
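A minimal sketch of that #define approach (my own made-up macro names; the poster's actual code isn't shown in the thread) could look like this:

// Hypothetical macros isolating thread creation to one header.
#if defined(_WIN32)
    #include <windows.h>
    typedef HANDLE ThreadHandle;
    #define THREAD_FUNC(name, arg) DWORD WINAPI name(void* arg)
    #define CREATE_THREAD(handle, func, arg) \
        ((handle) = CreateThread(0, 0, (func), (arg), 0, 0))
#else
    #include <pthread.h>
    typedef pthread_t ThreadHandle;
    #define THREAD_FUNC(name, arg) void* name(void* arg)
    #define CREATE_THREAD(handle, func, arg) \
        pthread_create(&(handle), 0, (func), (arg))
#endif

// The same source then builds on Win32 and on pthread platforms:
THREAD_FUNC(workerMain, param)
{
    // ... do the work, using param ...
    return 0;
}

// Elsewhere in the code:
//   ThreadHandle h;
//   CREATE_THREAD(h, workerMain, somePointer);

Mutexes, sleeps, and the other thread primitives get the same treatment, so the platform-specific bits stay in a single header.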



So, my point is, if your application has a good structure and you coded it nicely, then porting it should not be a difficult job. It should be a nice case of "LEGO code": *click* one thing out, *click* another thing in.
And then indeed 99% of your code is reusable.
STOP THE PLANET!! I WANT TO GET OFF!!
