
Bregma

Member Since 09 Dec 2005

#5173078 Can't solve problems without googling?

Posted by Bregma on 12 August 2014 - 07:23 AM

A tourist had just arrived in Manhattan with tickets to a concert and didn't know his way around.  The curtain time for the show was fast approaching and the tourist was becoming increasingly desperate to find the venue.

 

Finally, he approached a man walking swiftly and carrying a violin case, a sign he might have knowledge of the local entertainment industry.

 

"Excuse me," said the tourist, and the hurrying man looked up.

 

"Can you tell me how to get to Carnegie Hall?" queried the lost and desperate man.

 

The man with the violin case paused briefly and stared intensely at the tourist.  After a beat, he spat "Practice!!!" and hurried on his way.

 

To get good at something, you practise.  It's that simple.  Learning and repeating what others have done is one of the more effective ways of practising.




#5168624 C++ files organisation

Posted by Bregma on 23 July 2014 - 05:32 AM

I work with oodles of free software projects.  I've seen plenty with a separate include/ directory and many that have them combined.  I've seen many with a deep or broad hierarchy and the same with everything dumped into a single subdirectory.  Technically it makes no difference and there is no de facto or de jure standard; it's entirely up to the taste of the most vocal or dominant developer.
 
Since packaging project source inevitably means installing into a staging directory, the layout isn't relevant there.  Since installing means copying files out of the source tree into an installation directory, it's not relevant there either.
 
What is relevant is when someone comes along to try to read and understand the code: it's a lot easier when there isn't a separate include/ directory in the project and the headers and other source files are all combined in one hierarchy.  I've noticed the most vocal proponents of the separate include/ hierarchy tend to be those who spend no time maintaining other people's code.  It's also hard to dispute that in larger projects readability is improved by namespacing components into separate subdirectories, with all include references relative to the top of the hierarchy.  If each component produces a static convenience library (or, if required, a shared library), that also makes your unit testing easier.




#5166313 hyphotetical raw gpu programming

Posted by Bregma on 11 July 2014 - 04:37 PM

You guys are wayyy over my head with this stuff.  I'm kinda with fir on this; I only have a vague notion of what a GPU does, but I figure it's like he says, a vast array of memory as data input, a similar vast array as output, and a set of processors that read and process instructions from yet another array of memory to transform the input to the output.  Is that not the case?

 

Do all the processing units always work in lock-step or can they be divided into subgroups each processing a different program on different input sets?

 

Is there a separate processor that divides up the data and feeds it or controls the main array of processors as appropriate?

 

I mean, I can describe how a traditional CPU works down to the NAND gate level (and possibly further), but I'd be interested in learning more about GPU internals.




#5165792 relocation unknown

Posted by Bregma on 09 July 2014 - 06:33 AM

(1) branching is usually relative so it's relocatable; access to local variables is also relative, so such a procedure does not need a relocation fixup table

Right, this is also known as position independent code (PIC).  Not all CPUs support PIC (some lack a register-indexed memory read instruction), but the ones targeted by Microsoft do.  Some CPUs support only PIC.  It's a wide world, and you can compile C code to almost anything in it.
 
PIC requires the dedicated use of one or more registers.  The old Macintosh binary format (pre-OS X) dedicated the 68k's a4 for locals and a5 for globals.  The Intel architecture has a critical shortage of registers, so compilers tend not to use PIC for globals, and locals go through the already-dedicated SP register.

(2) I suspect (because I'm not sure about this) that code which 1) uses calls or 2) references global data needs fixups, as I suspect calls are not relative and global data access is not relative either

Yes.  Well, there's a bit of confusion here.  External references are not relative and are going to need some kind of resolution before runtime.  It's possible to have non-external globals that are not relative and can use absolute addresses.  It's a little more complicated than that if you're doing partial linking (eg. separate compilation).

(3) when watching some disassembly of .o and .obj files and such, I never see relocation tables listed.  Do most of these object files have such fixup tables built in or not?  Is there a way of listing them or something?

Depending on your tool, you may need to request the relocation tables be dumped explicitly.
 
If you were on Linux using the ELF format, running 'readelf -aW' on a .o, .so, or binary executable would reveal much.  Pipe it through a pager.

(4) if I understand correctly, if some .obj module references 17 symbols

(I think they may be both external and internal; say 17 = for example 7 external functions, 7 internal functions, 2 internal static data, 1 external static data), it would be easiest to define 17 pointers, not rewrite each move that references data and each call that calls a function, but only fill up those pointers at link time and do indirect (through the pointer) addressing (?)

but loaders prefer to physically rewrite each immediate reference for efficiency reasons?

The smaller the relocation table at load time, the faster loads will be.
 
Your static linker will do its best to resolve symbols at static link time.  If it can't, the symbol goes into the relocation table for resolution at load time.  Depending on settings, the symbols get resolved immediately or when needed (lazy loading).
 
Different binary standards resolve symbols in different ways.  An ELF file (eg. Linux) has a global lookup table that gets loaded and patched as required.  A COFF file (eg. Windows) has a jump table that gets statically linked and backpatched at load time (the .LIB file for a .DLL).  A Mach-O file (eg. Mac OS X) behaves much like an ELF file, but in an incompatible way.
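To make the pointer-table idea from (4) concrete, here is a minimal sketch in C++.  The names are made up for illustration; this is only an analogy to what a real loader's lookup table does, not actual loader code.

#include <cstdio>

// "External" functions that a loader would resolve at load time.
void greet() { std::puts("hello"); }
void farewell() { std::puts("goodbye"); }

// One slot per unresolved symbol, analogous to a global lookup table.
void (*symbol_table[2])() = { nullptr, nullptr };

void load_time_fixup()
{
    // The loader writes each resolved address exactly once...
    symbol_table[0] = &greet;
    symbol_table[1] = &farewell;
}

int main()
{
    load_time_fixup();
    // ...and every call site indirects through the table instead of
    // containing an absolute address that would need rewriting.
    symbol_table[0]();
    symbol_table[1]();
}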

 

Some required reading for someone trying to understand this is Ulrich Drepper's seminal paper on shared libraries, "How To Write Shared Libraries".  It's a little bit Linux-specific, but many of the concepts can be generalized, and I think it's just the sort of thing you might be looking for.  If not, it's still an interesting read.




#5165788 c++ function pointers

Posted by Bregma on 09 July 2014 - 06:04 AM


I usually avoid them for readability.

That's odd.  They were explicitly developed for, and I use them for, enhanced readability.

 

Then again, I second the use of std::function.
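For what it's worth, a quick sketch of the two side by side; the handler and callback names here are made up for illustration:

#include <functional>
#include <iostream>

void on_click(int button) { std::cout << "clicked " << button << '\n'; }

int main()
{
    void (*raw_handler)(int) = &on_click;          // raw function pointer
    std::function<void(int)> handler = on_click;   // std::function wrapper
    raw_handler(1);
    handler(2);
}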




#5165522 raw binary to obj conversion

Posted by Bregma on 08 July 2014 - 05:42 AM

It's not entirely on topic, but I'm going to suggest you seriously consider adopting GNU/Linux as your development platform of choice.  It's a toolchain hacker's dream: documentation on the tools is generally at hand, and it uses openly-sourced tools for pretty much everything, so you can either read the source to see how they work or read the detailed specification to see what they're supposed to do.

 

You'll find MinGW is a reduced copy of the GNU environment you usually get by default on Linux, so you're probably already familiar with much of how things work.  You can also bring much of what you learn back to MinGW to apply to Windows applications.

 

Trust me, you're going to make the switch sooner or later.

 

Linux uses the Executable and Linkable Format (ELF) for its binary objects (object files, shared libraries, executable binaries).  If you learn and understand the basics of that, it will go a long way toward understanding what you would need to do to convert other binary formats (.bin, COFF, etc).
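As a first step, here is a minimal sketch (just an illustration, not part of any tool) that checks whether a file begins with the ELF magic number, the very first field the format's specification describes:

#include <fstream>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 2) { std::cerr << "usage: elfcheck <file>\n"; return 1; }
    std::ifstream in(argv[1], std::ios::binary);
    unsigned char ident[4] = {};
    in.read(reinterpret_cast<char*>(ident), 4);
    bool is_elf = in && ident[0] == 0x7f && ident[1] == 'E'
                     && ident[2] == 'L' && ident[3] == 'F';
    std::cout << (is_elf ? "ELF object\n" : "not an ELF object\n");
    return 0;
}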




#5164594 Installer for Linux?

Posted by Bregma on 03 July 2014 - 11:00 AM

Delivering deb and rpm packages provides the most native experience.  However, in personal experience, maintaining these packages is a lot of work (i.e. you need to check it still works with each new Ubuntu/Debian release; sometimes it breaks or complains about something new, or some dependency that is still there has changed its version numbering scheme and now can't install, or forces a download of 1 GB worth of updates to install your software, etc.), which is the reason you always have to provide a zip file just in case the deb/rpm didn't work.

If your software no longer works on an up-to-date system, it's better to find out and fix it early on.  Or you can skip that and just leave it completely broken for end users.  The release of a new distro version is never a surprise: the dates are almost always known six months in advance, and the prerelease versions are available for many months before that just so you can update your stuff.  In the case of a commercial tool, you may find you want to update that often anyway.
 
Generally a dependency breaks its version scheme because it has an ABI break.  In that case, you probably want to update your packages to use the new ABI, so it's a good thing.  Most important packages will also provide a coinstallable backwards-compatible version to ease the transition.
 
If your software needs to download 1 GB worth of dependencies when it's properly packaged using the native package manager, then your ZIP file is going to be at least that big as well.  There is no shortcut.




#5164535 Installer for Linux?

Posted by Bregma on 03 July 2014 - 05:09 AM

Installing into /opt currently seems like a rather reasonable approach since we currently need all files in the same directory. nixstaller seems pretty nice, but it bothers me somewhat that it hasn't really been updated since 2009. It also borders on the issue Bregma brought up, seems a bit like the Windows way of doing stuff.

It is the Windows way of doing stuff, but most people won't care.  The purists will, but they'll either reject your product because it's not Free or they'll be vehemently vocal in forums on the internet, and either way they can be safely ignored.  The rest just want to install and use your software and don't care about such issues.

 

The real danger/disadvantage to having a complete bundle is (1) you will have to take care of your own security updates -- only really an issue if you have networked functionality or protect user assets in some way -- and (2) if you have any external dependencies at all (eg. C or other language runtimes) your product could break on the next system upgrade.  Unlike Windows, most Linux distros upgrade on a regular basis.

 

It really sounds like the single bundle is your best option.  It does work well for Steam, although they have their own installer and automated update mechanism to alleviate the problems I outlined above.  Have you considered going through Steam?




#5164299 Installer for Linux?

Posted by Bregma on 02 July 2014 - 06:41 AM

What is the best way to release a closed source commercial app for Linux? What about adding entries to their application launcher of choice (ie kickstart menu on KDE or whatever)?

Firstly, don't try to force a Windows-centric view of software distribution onto a non-Windows platform.  That's like a book publisher finding it has a monster hit of English literature, looking to expand its market to South America, and asking how to get all those people to learn to read English so they will buy the book.  It tends to be the wrong question.
 
Part of the problem stems from the misbelief that Linux is an operating system.  It's not, it's an OS kernel.  The thing people think of as 'Linux' is actually a distribution like Red Hat, Fedora, Debian, Ubuntu, SuSE, or one of myriad others: those are the OSes.  You don't target 'Linux', you target a distribution.  Few distributions are truly compatible at the package or binary level.  Following the book simile, the publisher now wants to translate the work into South American to improve sales.  The next step will be to complain that there should be a central authority in South America to dictate what language people speak, and that it should look and sound like English, just like Jesus used in the Bible.
 
Anyway, there are two approaches to commercial distribution on Linux.

 

(1) Use the native package manager for your target distribution.  You'll want to specify a limited set of distributions you target, and make debs/rpms available.  Users download and install the software using their native package manager, and your software uses the local version of dependent libraries.  The package managers take care of making sure dependencies are installed and configured correctly.  You still need to test on all your supported platforms.  This is generally the preferred method.

 

(2) Use the Macintosh approach of creating a standalone bundle of everything you need (all binaries, executables, assets, shared libraries, the works).  You will still need to specify a limited set of distributions you target and make the bundles available for download.  You will probably need to write a bespoke installer, or at least a custom wrapper, and integrate it into the OS.  You will still need to test on all your supported platforms.  This is the method Steam uses for its games. 

 

You should follow the freedesktop.org standards for integrating your stuff into the launch system(s).  See ColinDuquesnoy's reply above.




#5164126 C++ Multi-Byte - prepare string of extensions for OpenFileDialog

Posted by Bregma on 01 July 2014 - 03:15 PM

std::transform(str.begin(),str.end(),back_inserter(ext),[](char t) -> wchar_t { return t == ';' ? 0 : t; });

I don't think that's actually going to do the right thing if there's anything other than US ASCII in the original string. A simple integral promotion from a multibyte UTF-8 string into Microsoft's proprietary UCS-2 wide-character Unicode variant is a fail.

Of course, if you're restricting your domain to US ASCII, you're fine.
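For anything beyond ASCII, the usual approach on Windows is to convert the whole string through the Win32 API instead of promoting characters one at a time.  A minimal sketch, assuming a UTF-8 input and a Windows target (the helper name is made up):

#include <string>
#include <windows.h>

// Convert a UTF-8 string to a UTF-16 std::wstring via MultiByteToWideChar,
// letting the API handle multibyte sequences correctly.
std::wstring widen_utf8(const std::string& utf8)
{
    if (utf8.empty()) return std::wstring();
    int count = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                    static_cast<int>(utf8.size()), nullptr, 0);
    std::wstring wide(count, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                        static_cast<int>(utf8.size()), &wide[0], count);
    return wide;
}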




#5163807 How do multyple people write code for one project?

Posted by Bregma on 30 June 2014 - 06:13 AM

(1) If you have one or more developers on a project, always use a revision control system (VCS).  Developed as we entered the Space Age in the 1970s, this technology has been shown both to facilitate code sharing in a multi-user environment and to act as an aid to escrow for contractual obligations, and it has saved the bacon when Mr. Murphy stops by for a chat.  Please note the "one or more."  The only people who have regrets about a VCS are those who didn't use one.

 

(2) You will want to host the VCS on a commonly-accessible network node and provide the occasional backup of that node.  It's especially important that the node be commonly-accessible for teams of more than 1.  The advantage of using a separate node even for teams of one is the elimination of a single point of failure in your design, and the advantage for larger teams should be self-evident.

 

(3) The easiest way to set up a locally-managed VCS service is to use one of the modern distributed revision control systems (DVCS).  The tools git, mercurial, and bazaar are the most popular DVCS available, and all are fairly straightforward to set up as a service.

 

(4) Using a third-party DVCS service is even easier than maintaining your own.  Such services are generally easy to set up, provide regular backups, and often offer other services for team development, such as inline code reviews and publication (source code release downloads, wiki pages, etc).  If you absolutely need privacy, there are commercial DVCS offerings, and setting up an in-house or privately hosted service is not difficult.

 

(5) Most DVCSes provide a simple way to tag and/or pull a particular "snapshot" of the code as it exists at a particular moment (older, non-distributed revision control systems like SCCS, RCS, SVN, etc. either do not provide that or have very clunky methods for doing so).  This is important for advanced processes such as releasing software, bug tracking, QA, and so forth.

 

In short, you should use a DVCS such as git, mercurial (hg) or bazaar (bzr) to keep and share your code.  You might consider using a third-party hosting service to make it easier and provide an automatic off-site backup of your most precious asset.




#5163422 speeding this with sse or sse intrinsics

Posted by Bregma on 28 June 2014 - 06:05 AM

The GCC documentation has some very useful examples.
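By way of illustration, a small hedged sketch (not the code from this thread) of the sort of thing those examples cover: adding two float arrays four lanes at a time with SSE intrinsics.

#include <xmmintrin.h>  // SSE intrinsics
#include <cstdio>

// Add two float arrays four elements at a time; n is assumed to be a
// multiple of four just to keep the sketch short.
void add4(const float* a, const float* b, float* out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
}

int main()
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float out[8];
    add4(a, b, out, 8);
    std::printf("%g %g\n", out[0], out[7]);  // prints 9 9
}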




#5162436 question about linking

Posted by Bregma on 23 June 2014 - 06:02 PM

Does the C standard not say something?  Or is there some common rule about whether it is linked in or not?

"Linking" is not a concept addressed by the language standard, no.  There is no requirement in the language standard that a system offer separate compilation of modules, and indeed there are embedded systems that do not.

 

Practically, though, most modern (post-1960s at least) linkers will normally only satisfy undefined symbols from a static archive (library).  Dynamic shared objects (DLLs, .so files, .dylibs and so on) are loaded by the dynamic link-loader in their entirety, just as an executable is, but their symbol relocation tables may not be resolved until required (so-called "lazy" loading).  Command-line options can be used to vary that behaviour (eg. -Wl,--whole-archive passed to GCC).

 

Object modules may also have unreferenced symbols stripped.  That's going to depend on your linker and likely on the options passed to the linker.




#5162082 -march=pentium3 -mtune=generic -mfpmath=both ?

Posted by Bregma on 22 June 2014 - 07:37 AM

1) I understand that -march says what instruction set I should restrict the compiler to use (for example, setting -march=pentium3 builds my binary using only instructions available on a Pentium 3)

-march sets the minimum compatibility level... in this case it means Pentium III or later.


2) I also understand that -mtune says what target to optimize the previous instructions for; for example I can take P3 instructions and optimize them for a Core 2.
 
Confusingly, the docs say
 
"-march=cpu-type Generate instructions for the machine type cpu-type. The choices for cpu-type are the same as for -mtune. Moreover, specifying -march=cpu-type implies -mtune=cpu-type. "
 
I doubt this is true - does this mean that when choosing -march=pentium3 -mtune=generic the mtune setting is discarded and this is equivalent to -march=pentium3 -mtune=pentium3?  I don't think so (this is confusing)

Why do you doubt it?  It makes perfect sense: -march has priority.  If you choose to set the minimum compatibility level, the optimizer will use that when making choices.

1. I would like to choose a reasonable code set that would work on older machines but also work OK on more modern ones.  I chose -march=pentium3 as I doubt anyone uses something older than a P3, and I didn't notice a noticeable change when putting something newer here (like -march=core2 - I didn't notice any speedup)

While there are millions of pre-PIII machines still going into production, it's unlikely that your game will be running on them (they're things like disk controllers, routers, refrigerators, toasters, and so on).  PIII is probably good enough, since it has PAE by default and other improvements like fast DIV, better interlocking, and extended prefetch.

It's also likely that newer architectures don't introduce new abilities that your picooptimization can take advantage of when it comes to something not CPU-bound, like a game.

2. What in general can I add to this command line to speed things up?  (Or to throw away some runtime or exception stuff bytes or something like that.)

In general, such picooptimization is not going to make one whit of difference in a typical game. What you really need to do is hand-tune some very specific targeted benchmark programs so they show significant difference between the settings (by not really running the same code), like the magazines and websites do when they're trying to sell you something.

I'm using -O2 here as I noticed no difference with -O3

Hardly surprising, since most picooptimizations don't provide much noticeable difference in non-CPU-bound code.  -O2 is likely good enough (and definitely better than -O1 or -O0), but -O3 has been known to introduce bad code from time to time, so I always stay away from it.

I noticed that -mfpmath=both sped things up (though the docs say it's dangerous, and I didn't understand why); -ffast-math / -funsafe-math-optimizations also sped things up

Those switches end up altering the floating-point results. You may lose accuracy, and some results may vary from IEEE standards in their higher-order significant digits. If you're doing a lot of repeated floating-point calculations in which such error can propagate quickly, you will not want to choose those options. For the purposes of most games, they're probably OK. Don't enable them when calculating missile trajectories for real-life nuclear warheads. Don't forget GCC has other uses with much stricter requirements than casual game development.
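The root of it is that floating-point arithmetic is not associative, and those switches give the compiler licence to regroup operations anyway.  A tiny illustration (just an example of the effect, compiled without any special flags, showing the two groupings a fast-math compiler is free to swap between):

#include <cstdio>

int main()
{
    double a = 1.0, b = 1e-16, c = -1.0;
    std::printf("(a + b) + c = %.17g\n", (a + b) + c);  // prints 0
    std::printf("a + (b + c) = %.17g\n", a + (b + c));  // prints roughly 1.1e-16
}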

I'd say that while it's fun to play with the GCC command-line options and it's a good idea to understand them, they're not really going to give you a lot of optimization oomph. You will get far more bang for your buck playing with algorithms and structuring your code and data to take advantage of on-core caching.
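As a small, hedged example of that last point (not code from the thread): the two loops below do identical arithmetic, but the first walks memory sequentially while the second strides across cache lines, and the second is typically several times slower.

#include <cstdio>
#include <vector>

int main()
{
    const int n = 1024;
    std::vector<float> m(n * n, 1.0f);

    float row_sum = 0.0f;
    for (int r = 0; r < n; ++r)          // cache-friendly: unit stride
        for (int c = 0; c < n; ++c)
            row_sum += m[r * n + c];

    float col_sum = 0.0f;
    for (int c = 0; c < n; ++c)          // cache-hostile: stride of n floats
        for (int r = 0; r < n; ++r)
            col_sum += m[r * n + c];

    std::printf("%g %g\n", row_sum, col_sum);
}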

Also, if you haven't already, you might want to read about the GCC internals to understand more of what's going on under the hood.




#5161151 Re-learning C++ and some help with learning it.

Posted by Bregma on 17 June 2014 - 02:21 PM

I feel the same way about playing the piano. I would really love to be able to tickle those ivories like a pro and every time I walk by it I feel a little guilty. I just hate not being able to play really well and I have ideas for some really good music, but I hate the learning and practice.

Is there an easier way to get to Carnegie Hall?



