
Member Since 09 Dec 2005
Offline Last Active Today, 05:30 AM

Posts I've Made

In Topic: C++ files organisation

Today, 05:32 AM

I work with oodles of free software projects.  I've seen plenty with a separate include/ directory and many that have the headers combined with the sources.  I've seen deep hierarchies, broad hierarchies, and everything dumped into a single subdirectory.  Technically it makes no difference, and there is no de facto or de jure standard; it's entirely up to the taste of the most vocal or dominant developer.
Packaging a project's source inevitably means installing into a staging directory, so the layout isn't relevant there.  Installing means copying files out of the source tree into an installation directory, so it isn't relevant there either.
What is relevant is when someone comes along to try to read and understand the code: it's a lot easier when there isn't a separate include/ directory in the project, and the headers and other source files are all combined in one hierarchy.  I've noticed the most vocal proponents of a separate include/ hierarchy tend to be those who spend no time maintaining other people's code.  There is also little argument that in larger projects readability is improved by namespacing components into separate subdirectories, with all include references relative to the top of the hierarchy.  If each component produces a static convenience library (or, if required, a shared library), that also makes your unit testing easier.
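A hypothetical layout along those lines might look like this (all directory and file names here are invented for illustration):

```
project/
    core/               <- builds libcore.a as a convenience library
        buffer.h
        buffer.cpp
    net/                <- builds libnet.a
        socket.h
        socket.cpp
    main.cpp            <- #include "net/socket.h", relative to the top
```

Headers live beside the sources that implement them, and every include path is spelled from the top of the tree, so a reader can find net/socket.cpp from `#include "net/socket.h"` without consulting the build system.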

In Topic: hyphotetical raw gpu programming

11 July 2014 - 04:37 PM

You guys are wayyy over my head with this stuff.  I'm kinda with fir on this; I only have a vague notion of what a GPU does, but I figure it's like he says, a vast array of memory as data input, a similar vast array as output, and a set of processors that read and process instructions from yet another array of memory to transform the input to the output.  Is that not the case?


Do all the processing units always work in lock-step or can they be divided into subgroups each processing a different program on different input sets?


Is there a separate processor that divides up the data and feeds it or controls the main array of processors as appropriate?


I mean, I can describe how a traditional CPU works down to the NAND gate level (and possibly further), but I'd be interested in learning about GPU internals more.

In Topic: <o> Is the "STEM Shortage" a myth in The USA ? <o>

11 July 2014 - 05:45 AM

Gods grant me the patience to land the perfect job in my field NOW.   Dammit, I showed up to most of my classes, where's my damn trophy?

In Topic: relocation unknown

09 July 2014 - 06:33 AM

(1)- branching is usually relative so its reallocable, acces to local variables are also relative so such kind of procedure do not need realocation fixup table

Right, this is also known as position-independent code (PIC).  Not all CPUs support PIC (some lack PC-relative or register-indexed addressing modes), but the ones targeted by Microsoft do.  Some CPUs support only PIC.  It's a wide world, and you can compile C code to almost anything in it.
PIC requires the dedicated use of one or more registers.  The old Macintosh binary format (pre-OS X) dedicated the 68k's a4 for locals and a5 for globals.  The Intel architecture has a critical shortage of registers, so compilers tend not to use PIC for globals, and locals go through the already-dedicated stack pointer.

(2) I suspect (becouse im not sure as to this) that code 1) that uses calls 2) that references to global data - need fixups
as i suspect calls are not relative, also global data acces is not relative

Yes.  Well, there's a bit of confusion here.  External references are not relative and are going to need some kind of resolution before run time.  It's possible to have non-external globals that are not relative and can use absolute addresses.  It's a little more complicated than that if you're doing partial linking (eg. separate compilation).

(3) when wathing some disasembly of .o .obj files etc and stuf i never see relocation tables listed - Do most such of this object files has such fixup tables build-in or no? is there a way of listing them or something?

Depending on your tool, you may need to request the relocation tables be dumped explicitly.
If you were on Linux using the ELF format, running 'readelf -aW' on a .o, .so, or binary executable would reveal much ('readelf -rW' lists just the relocation sections).  Pipe it through a pager.

(4) if i understand correctly if some .obj module references 17 symbols

(I think thay may be both external or internal, (say 17 = for example 7 external function, 7 internal functions, 2 internal static data, 1 external static data ) it would be easiest to define 17 pointers and do not do a rewritting of each move that references do data and each call that calls the function but only fill up those pointers at link time and do indirectional (thru the pointer) addressings (?)

but loaders prefer to physically rewrite each immediate reference for efficiency reasons?

The smaller the relocation table, the faster loads will be.
Your static linker will do its best to resolve symbols at static link time.  If it can't, the symbol goes into the relocation table for resolution at load time.  Depending on settings, the symbols get resolved immediately at load or only when first used (lazy binding).
Different binary standards resolve symbols in different ways.  An ELF file (eg. Linux) has a global offset table that gets loaded and patched as required.  A COFF file (eg. Windows) has an import table that gets statically linked against the .LIB for a .DLL and backpatched at load time.  A Mach-O file (eg. Mac OS X) behaves much like an ELF file, but in an incompatible way.


Some required reading for anyone trying to understand this is Ulrich Drepper's seminal paper "How to Write Shared Libraries".  It's a little bit Linux-specific, but many of the concepts generalize, and I think it's just the sort of thing you might be looking for.  If not, it's still an interesting read.

In Topic: c++ function pointers

09 July 2014 - 06:04 AM

I usually avoid them for readability.

That's odd.  They were developed explicitly for enhanced readability, and that's what I use them for.


Then again, I second the use of std::function.