It's not entirely on topic, but I'm going to suggest you seriously consider adopting GNU/Linux as your development platform of choice. It's a toolchain hacker's dream, documentation on the tools is generally at hand, and it uses open-source tools for pretty much everything, so you can either read the source to see how they work or read the detailed specification to see what they're supposed to do.
You'll find MinGW is a reduced copy of the GNU environment you usually get by default on Linux, so you're probably already familiar with much of how things work. You can also bring much of what you learn back to MinGW to apply to Windows applications.
Trust me, you're going to make the switch sooner or later.
Linux uses the Executable and Linkable Format (ELF) for its binary objects (object files, shared libraries, executable binaries). If you learn and understand the basics of that format, it will go a long way toward understanding what you would need to do to convert other binary formats (.bin, COFF, etc.).
Delivering deb and rpm packages provides the most native experience. However, in my personal experience, maintaining these packages is a lot of work: you need to check that everything still works with each new Ubuntu/Debian release; sometimes it breaks or complains about something new; or some dependency is still there but changed its version numbering scheme and now can't install, or forces users to download 1 GB worth of updates just to install your software, and so on. That is why you always have to provide a zip file as well, just in case the deb/rpm doesn't work.
If your software no longer works on an up-to-date system, it's better to find out and fix it early on. Or you can skip that and just leave it completely broken for end users. The release of new distro versions is never a surprise: the dates are almost always known six months in advance, and prerelease versions are available for months beforehand precisely so you can update your stuff. In the case of a commercial tool, you may find you want to update that often anyway.
Generally, a dependency breaks its versioning scheme because it has an ABI break. In that case, you probably want to update your packages to use the new ABI, so it's a good thing. Most important packages will also provide a coinstallable backwards-compatible version to ease the transition.
If your software, properly packaged using the native package manager, needs to download 1 GB worth of dependencies, then your ZIP file is going to be at least that big as well. There is no shortcut.
Installing into /opt currently seems like a rather reasonable approach, since we currently need all files in the same directory. nixstaller seems pretty nice, but it bothers me somewhat that it hasn't really been updated since 2009. It also touches on the issue Bregma brought up: it seems a bit like the Windows way of doing things.
It is the Windows way of doing things, but most people won't care. The purists will, but they'll either reject your product because it's not Free or they'll be vehemently vocal in forums on the internet, where they can be safely ignored. The rest just want to install and use your software and don't care about such issues.
The real danger/disadvantage to having a complete bundle is (1) you will have to take care of your own security updates (only really an issue if you have networked functionality or protect user assets in some way), and (2) if you have any external dependencies at all (e.g. C or other language runtimes), your product could break on the next system upgrade. Unlike Windows, most Linux distros upgrade on a regular basis.
It really sounds like the single bundle is your best option. It does work well for Steam, although they have their own installer and automated update mechanism to alleviate the problems I outlined above. Have you considered going through Steam?
What is the best way to release a closed source commercial app for Linux? What about adding entries to their application launcher of choice (ie kickstart menu on KDE or whatever)?
Firstly, don't try to force a Windows-centric view of software distribution onto a non-Windows platform. That's like a book publisher finding it has a monster hit of English literature, looking to expand its market to South America, and asking how to get all those people to learn to read English so they will buy the book. It tends to be the wrong question.
Part of the problem stems from the misbelief that Linux is an operating system. It's not; it's an OS kernel. The thing people think of as 'Linux' is actually a distribution like Red Hat, Fedora, Debian, Ubuntu, SuSE, or one of myriad others: those are the OSes. You don't target 'Linux'; you target a distribution. Few distributions are truly compatible at the package or binary level. Following the book simile, the publisher now wants to translate the work into South American to improve sales. The next step will be to complain that there should be a central authority in South America to dictate what language people speak, and that it should look and sound like English, just like Jesus used in the Bible.
Anyway, there are two approaches to commercial distribution on Linux.
(1) Use the native package manager for your target distribution. You'll want to specify a limited set of distributions you target, and make debs/rpms available. Users download and install the software using their native package manager, and your software uses the local version of dependent libraries. The package managers take care of making sure dependencies are installed and configured correctly. You still need to test on all your supported platforms. This is generally the preferred method.
(2) Use the Macintosh approach of creating a standalone bundle of everything you need (all binaries, executables, assets, shared libraries, the works). You will still need to specify a limited set of distributions you target and make the bundles available for download. You will probably need to write a bespoke installer, or at least a custom wrapper, and integrate it into the OS. You will still need to test on all your supported platforms. This is the method Steam uses for its games.
You should follow the freedesktop.org standards for integrating your stuff into the launch system(s). See ColinDuquesnoy's reply above.
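For example, a minimal freedesktop.org desktop entry (the application name and paths here are made up for illustration) dropped into /usr/share/applications or ~/.local/share/applications will get an /opt-installed program into the launcher menus of KDE, GNOME, and friends:

```ini
[Desktop Entry]
Type=Application
Name=MyGame
Comment=A hypothetical commercial game installed under /opt
Exec=/opt/mygame/bin/mygame
Icon=/opt/mygame/share/icons/mygame.png
Terminal=false
Categories=Game;
```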
I don't think that's actually going to do the right thing if there's anything other than US ASCII in the original string. A simple integral promotion from a multibyte UTF-8 string into Microsoft's proprietary UCS-2 wide-character Unicode variant is a fail.
Of course, if you're restricting your domain to US ASCII, you're fine.
(1) If you have one or more developers on a project, always use a revision control system (VCS). Developed as we entered the Space Age in the 1970s, this technology has been shown to facilitate code sharing in a multi-user environment and act as an aid to escrow for contractual obligations, and has saved the bacon of many a developer when Mr. Murphy stops by for a chat. Please note the "one or more." The only people who have regrets about a VCS are those who didn't use one.
(2) You will want to host the VCS on a commonly-accessible network node and provide the occasional backup of that node. It's especially important that the node be commonly accessible for teams of more than one. The advantage of using a separate node even for teams of one is the elimination of a single point of failure; the advantage for larger teams should be self-evident.
(3) The easiest way to set up a locally-managed VCS service is to use one of the modern distributed revision control systems (DVCS). The tools git, mercurial, and bazaar are the most popular DVCS available, and all are fairly straightforward to set up as a service.
(4) Using a third-party DVCS service is even easier than maintaining your own. Such services generally offer easy setup, provide regular backups, and often offer other services for team development, such as inline code reviews and publication (source code release downloads, wiki pages, etc.). If you absolutely need privacy, there are commercial DVCS offerings, and setting up an in-house or privately hosted service is not difficult.
(5) Most DVCS provide a simple way to tag and/or pull a particular "snapshot" of the code as it exists at a particular moment (older non-distributed revision control systems like SCCS, RCS, and SVN either do not provide that or have very clunky methods for doing so). This is important for advanced processes such as releasing software, bug tracking, QA, and so forth.
In short, you should use a DVCS such as git, mercurial (hg) or bazaar (bzr) to keep and share your code. You might consider using a third-party hosting service to make it easier and provide an automatic off-site backup of your most precious asset.
Does the C standard say something about this? Or is it some common rule that determines whether it is linked in or not?
"Linking" is not a concept addressed by the language standard, no. There is no requirement in the language standard that a system offer separate compilation of modules, and indeed there are embedded systems that do not.
Practically, though, most modern (post-1960s at least) linkers will normally only satisfy undefined symbols from a static archive (library). Dynamic shared objects (DLLs, .so files, .dylibs, and so on) are loaded by the dynamic link-loader in their entirety, just as an executable is, but their symbol relocation tables may not be resolved until required (so-called "lazy" loading). Command-line options can be used to vary that behaviour (e.g. -Wl,--whole-archive passed to GCC).
Object modules may also have unreferenced symbols stripped. That's going to depend on your linker, and likely on the options passed to the linker.
1) I understand that -march says what instruction set the compiler should be restricted to (for example, setting -march=pentium3 makes my binary use only instructions available on the Pentium III)
-march sets the minimum compatibility level... in this case it means Pentium III or later.
2) I also understand that -mtune says what target the previous instructions are optimized for; for example, I can get P3 instructions and optimize them for Core 2
Confusingly, the docs say:
"-march=cpu-type Generate instructions for the machine type cpu-type. The choices for cpu-type are the same as for -mtune. Moreover, specifying -march=cpu-type implies -mtune=cpu-type. "
I doubt this is true - does this mean that when choosing -march=pentium3 -mtune=generic, the mtune setting is discarded and this is equivalent to -march=pentium3 -mtune=pentium3? I don't think so (this is confusing)
Why do you doubt it? The implication only applies when -mtune is not given explicitly: -march=pentium3 by itself implies -mtune=pentium3, but an explicit -mtune=generic still takes effect and overrides the implied tuning; nothing is discarded. Remember that -march sets the minimum compatibility level (which instructions may be emitted), while -mtune only guides scheduling and other optimization choices for a particular CPU.
1. I would like to choose a reasonable code set that would work on older machines but also work OK on more modern ones. I chose -march=pentium3, as I doubt anyone uses something older than a P3, and I didn't notice any change when putting something newer here (like -march=core2 - I didn't notice any speedup)
While there are millions of pre-PIII machines still going into production, it's unlikely that your game will be running on them (they're things like disk controllers, routers, refrigerators, toasters, and so on). PIII is probably good enough, since it has PAE by default and other improvements like fast DIV, better interlocking, and extended prefetch.
It's also likely that newer architectures don't introduce new abilities that your picooptimization can take advantage of when it comes to something not CPU-bound, like a game.
2. What in general can I add to this command line to speed things up? (or to throw away some runtime or exception-handling bytes or something like that)
In general, such picooptimization is not going to make one whit of difference in a typical game. What you really need to do is hand-tune some very specific targeted benchmark programs so they show significant difference between the settings (by not really running the same code), like the magazines and websites do when they're trying to sell you something.
I'm using -O2 here, as I noticed no difference with -O3
Hardly surprising, since most picooptimizations don't provide much noticeable difference in non-CPU-bound code. -O2 is likely good enough (and definitely better than -O1 or -O0), but -O3 has been known to introduce bad code from time to time, so I always stay away from it.
I noticed that -mfpmath=both sped things up (though the docs say something about it being dangerous, which I didn't understand); -ffast-math and -funsafe-math-optimizations also sped things up
Those switches end up altering the floating-point results. You may lose accuracy, and some results may vary from the IEEE standard in their least significant digits. If you're doing a lot of repeated floating-point calculations in which such error can propagate quickly, you will not want to choose those options. For the purposes of most games, they're probably OK. Don't enable them when calculating missile trajectories for real-life nuclear warheads. Don't forget GCC has other users with much stricter requirements than casual game development.
I'd say that while it's fun to play with the GCC command-line options and it's a good idea to understand them, they're not really going to give you a lot of optimization oomph. You will get far more bang for your buck playing with algorithms and structuring your code and data to take advantage of on-core caching.
Also, if you haven't already, you might want to read about the GCC internals to understand more of what's going on under the hood.
I feel the same way about playing the piano. I would really love to be able to tickle those ivories like a pro and every time I walk by it I feel a little guilty. I just hate not being able to play really well and I have ideas for some really good music, but I hate the learning and practice.
Anyway, this is a pitfall/trap for me: something implicitly slowing down and bloating my program. There should be a large text * * * WARNING: POSSIBLE CODE SLOWDOWN (reason here) * * *
Yes, it sort of goes against the C++ philosophy of "pay only for what you use." It could be argued, however, that you're using function-local static variables, so you're paying the price. That argument gets kind of sketchy, though, because it can be countered with "but I'm not using multiple threads, so why should I pay the price?"
Beware of letting a committee near anything, even for a minute.
I never heard the word 'serialization' used in that sense (serialization usually meant saving some kind of data to disk), though this meaning is quite usable
Yes, I've run into that before. A lot of people use 'serialization' to mean streaming data, a synonym for 'marshalling'. I understand Java used that in its docs and it took off from there. Perhaps it originated from the act of sending data over a serial port (RS-232C) although we always used the term 'transmit' for that (and 'write to disk' for saving to disk, maybe 'save in text format' to be more explicit).
I'm using 'serialization' in its original meaning: enforcing the serial operation of something that could potentially be performed in parallel or simultaneously. The usage predates the Java language, and so do I. I apologize for the confusion. If anyone can suggest a better term, I'm open to suggestions.
What is the mechanism? It slowed my program down and grew it by 50 KB.
I suspect, without proof, that it pulled in some library code to wrap and serialize the initialization of the function-local statics, and the slowdown is because that code gets executed. Thread serialization generally requires context switches and pipeline stalls. Without knowing the code, I suspect that code path needs to be executed every time so it can check to see if the object has been initialized.