

Member Since 09 Dec 2005

#5177968 Did I begin learning late?

Posted by on 03 September 2014 - 09:04 PM

When I read the thread title I thought the man was 50 years old...

Careful, I don't think I started learning Python until I was 50 years old and now I use it frequently in my day job.  By the time you're old enough to know you don't know everything, you already know you're not too old to learn new stuff.

#5177561 Did I begin learning late?

Posted by on 01 September 2014 - 08:14 PM

By age 15 there are already irreversible changes taking place in your body, and even hormone shots will have limited effect in helping you become a Python programmer.  Perhaps with the help of a really good therapist and the love and support of your family you could be happy doing JavaScript.  In fact, many devops lead happy and fulfilling lives and nobody even knows what kind of foundation classes they have hidden under their everyday wardrobe.


It's a good thing you haven't chosen to become a doctor or a lawyer;  they need to have advanced degrees before they can even speak full sentences.  Don't get me started on how the life of a priest begins at conception, and circus clowns have to pie before they're even a twinkle in their father's eye.


Seriously, where did this weird ageist meme come from?

#5175456 Are degrees worth it for Level / Environment design areas?

Posted by on 22 August 2014 - 07:03 AM

I have some required reading for students, especially those in your position.

#5175111 Checkstyle

Posted by on 20 August 2014 - 02:47 PM

No need to strictly stick to the traditional 80 characters (as derived from when we coded via terminals)

Dude, you are sooo wrong.
It came from when we used punched cards, which only had 80 columns.  I still have boxes of them from when I was in university.  Those babies predate digital computers, even.  Moses used them in the Bible when he got the 10 commandments; that's why none of them are more than 79 characters (the first column marked the commandment as a comment).

#5174906 Ubuntu + Performance

Posted by on 19 August 2014 - 08:17 PM

Well, those hardware specs are better than mine, and I develop the Unity DE, so I think you'll be fine.


Keep in mind that when you're running in fullscreen mode with Unity you're going directly to the hardware:  compositing is turned off.  Use a lighter DE if you wish: Lubuntu or Xubuntu do not use a compositor, but you won't see much difference in fullscreen mode.


Don't bother looking for proprietary Intel drivers:  they're part of Mesa.  Intel open-sources its drivers, contributed them to Mesa, and maintains them there.  Mesa is not just a software rasterizer.

#5174431 Event/Action Management

Posted by on 18 August 2014 - 06:46 AM

I've already encountered some pitfalls with this approach, particularly the catches of using function pointers and void* variables, and was curious to see what other people were doing in this kind of case?

I see from the fact you're using std::map that you're already partially using C++.  You could go a step further and use std::function instead of the C way of raw pointers and void* and avoid most of those catches you've encountered.

#5173191 QT vs. wxWidgets for OpenGL

Posted by on 12 August 2014 - 04:21 PM

Coming back to Qt, and setting aside the development build setup, are people's overall opinions on Qt more favorable than wxWidgets?

Consider this: there is an entire Linux desktop environment based on Qt (KDE).  Canonical, the company behind Ubuntu, has dumped GTK as their toolkit of choice and is developing their new multi-platform desktop using Qt.  There is no desktop using wxWidgets.


wxWidgets is pretty close to a free clone of Microsoft's MFC.  Microsoft has recommended against the use of MFC for quite some years now.

#5173188 What's the industry like?

Posted by on 12 August 2014 - 04:15 PM

I live in Ontario, Canada - What's the industry like here?

Just an FYI: Ottawa has a burgeoning gamedev industry, and the local university, Carleton, offers a gamedev degree program (not a coincidence).  It may be possible to work in the industry while gaining a specialized formal education there.

#5173078 Can't solve problems without googling?

Posted by on 12 August 2014 - 07:23 AM

A tourist had just arrived in Manhattan with tickets to a concert and didn't know his way around.  The curtain time for the show was fast approaching and the tourist was becoming increasingly desperate to find the venue.


Finally, he approached a man walking swiftly and carrying a violin case, a sign he might have knowledge of the local entertainment industry.


"Excuse me," said the tourist, and the hurrying man looked up.


"Can you tell me how to get to Carnegie Hall?" queried the lost and desperate man.


The man with the violin case paused briefly and stared intensely at the tourist.  After a beat, he spat "Practice!!!" and hurried on his way.


To get good at something, you practise.  It's that simple.  Learning and repeating what others have done is one of the more effective ways of practising.

#5168624 C++ files organisation

Posted by on 23 July 2014 - 05:32 AM

I work with oodles of free software projects.  I've seen plenty with a separate include/ directory and many that have them combined.  I've seen many with a deep or broad hierarchy, and the same with everything dumped into a single subdirectory.  Technically it makes no difference, and there is no de facto or de jure standard; it's entirely up to the taste of the most vocal or dominant developer.
Since packaging project source inevitably means installing into a staging directory, that isn't relevant.  Since installing means copying out of the sources into an installation directory, that's not relevant either.
What is relevant is when someone comes along to try to read and understand the code: it's a lot easier when there isn't a separate include/ directory in the project and the headers and other source files are all combined in one hierarchy.  I've noticed the most vocal proponents of a separate include/ hierarchy tend to be those who spend no time maintaining other people's code.  It's also beyond argument that in larger projects readability is improved by namespacing components into separate subdirectories, with all include references relative to the top of the hierarchy.  If each component produces a static convenience library (or, if required, a shared library), that also makes your unit testing easier.
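A sketch of the combined layout described above (project and file names hypothetical): headers sit next to their sources, components are namespaced by subdirectory, and every include is written relative to the project root.

```
project/
├── engine/
│   ├── renderer.h
│   └── renderer.cpp     // #include "engine/renderer.h"
├── audio/
│   ├── mixer.h
│   └── mixer.cpp        // #include "audio/mixer.h"
└── main.cpp             // #include "engine/renderer.h"
```

Each component directory can then be built as its own static convenience library and linked into both the application and its unit tests.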

#5166313 hyphotetical raw gpu programming

Posted by on 11 July 2014 - 04:37 PM

You guys are wayyy over my head with this stuff.  I'm kinda with fir on this; I only have a vague notion of what a GPU does, but I figure it's like he says, a vast array of memory as data input, a similar vast array as output, and a set of processors that read and process instructions from yet another array of memory to transform the input to the output.  Is that not the case?


Do all the processing units always work in lock-step or can they be divided into subgroups each processing a different program on different input sets?


Is there a separate processor that divides up the data and feeds it or controls the main array of processors as appropriate?


I mean, I can describe how a traditional CPU works down to the NAND gate level (and possibly further), but I'd be interested in learning about GPU internals more.

#5165792 relocation unknown

Posted by on 09 July 2014 - 06:33 AM

(1) branching is usually relative, so it's relocatable; access to local variables is also relative, so such a procedure does not need a relocation fixup table

Right, this is also known as position-independent code (PIC).  Not all CPUs support PIC (some lack a register-indexed memory read instruction), but the ones targeted by Microsoft do.  Some CPUs support only PIC.  It's a wide world, and you can compile C code to almost anything in it.
PIC requires the dedicated use of one or more registers.  The old Macintosh binary format (pre-OS X) dedicated the 68k's a4 register for locals and a5 for globals.  The Intel architecture has a critical shortage of registers, so compilers tend not to use PIC for globals, and locals use the already-dedicated SP register.

(2) I suspect (because I'm not sure about this) that code which 1) uses calls or 2) references global data needs fixups, as I suspect calls are not relative and global data access is also not relative

Yes.  Well, there's a bit of confusion here.  External references are not relative and will need some kind of resolution before runtime.  It's also possible to have non-external globals that are not relative and can use absolute addresses.  It gets a little more complicated than that if you're doing partial linking (e.g. separate compilation).

(3) when watching some disassembly of .o and .obj files and such, I never see relocation tables listed.  Do most such object files have fixup tables built in or not?  Is there a way of listing them?

Depending on your tool, you may need to request the relocation tables be dumped explicitly.
If you were on Linux using the ELF format, running 'readelf -aW' on a .o, .so, or binary executable would reveal much.  Pipe it through a pager.

(4) if I understand correctly, if some .obj module references 17 symbols

(I think they may be both external and internal; say 17 = for example 7 external functions, 7 internal functions, 2 internal static data, 1 external static data), it would be easiest to define 17 pointers, not rewrite each move that references data and each call that calls a function, but only fill up those pointers at link time and do indirect (through-the-pointer) addressing?

but loaders prefer to physically rewrite each immediate reference for efficiency reasons?

The smaller the relocation table at load time, the faster loads will be.
Your static linker will do its best to resolve symbols at static link time.  If it can't, the symbol goes into the relocation table for resolution at load time.  Depending on settings, the symbols get resolved immediately or when needed (lazy loading).
Different binary standards resolve symbols in different ways.  An ELF file (e.g. Linux) has a global lookup table that gets loaded and patched as required.  A PE/COFF file (e.g. Windows) has a jump table that gets statically linked and backpatched at load time (the .LIB file for a .DLL).  A Mach-O file (e.g. Mac OS X) behaves much like an ELF file, but in an incompatible way.


Some required reading for anyone trying to understand this is Ulrich Drepper's seminal paper on shared libraries, "How to Write Shared Libraries".  It's a little bit Linux-specific, but many of the concepts generalize, and I think it's just the sort of thing you might be looking for.  If not, it's still an interesting read.

#5165788 c++ function pointers

Posted by on 09 July 2014 - 06:04 AM

I usually avoid them for readability.

That's odd.  They were explicitly developed for enhanced readability, and that's why I use them.


Then again, I second the use of std::function.

#5165522 raw binary to obj conversion

Posted by on 08 July 2014 - 05:42 AM

It's not entirely on topic, but I'm going to suggest you seriously consider adopting GNU/Linux as your development platform of choice.  It's a toolchain hacker's dream, documentation on the tools is generally at hand, and it uses openly-sourced tools for pretty much everything, so you can either read the source to see how they work or read the detailed specification to see what they're supposed to do.


You'll find MinGW is a reduced copy of the GNU environment you usually get by default on Linux, so you're probably already familiar with much of how things work.  You can also bring much of what you learn back to MinGW to apply to Windows applications.


Trust me, you're going to make the switch sooner or later.


Linux uses the Executable and Linkable Format (ELF) for its binary objects (object files, shared libraries, executable binaries).  If you learn and understand the basics of that, it will go a long way toward understanding what you would need to do to convert other binary formats (.bin, COFF, etc.).

#5164594 Installer for Linux?

Posted by on 03 July 2014 - 11:00 AM

Delivering deb and rpm packages provides the most native experience.  However, in my personal experience, maintaining these packages is a lot of work (i.e. you need to check it still works with each new Ubuntu/Debian release; sometimes it breaks or complains about something new, or some dependency that is still there changed its version numbering scheme and now can't install, or it forces users to download 1 GB worth of updates to install your software, etc.), which is the reason you always have to provide a zip file just in case the deb/rpm didn't work.

If your software no longer works on an up-to-date system, it's better to find out and fix it early on.  Or you can skip that and just leave it completely broken for end users.  The release of new distro versions is never a surprise: the dates are almost always known six months in advance, and prerelease versions are available for many months just so you can update your stuff.  In the case of a commercial tool, you may find you want to update that often anyway.
Generally, dependencies break their versioning scheme because they have an ABI break.  In that case, you probably want to update your packages to use the new ABI, so it's a good thing.  Most important packages will also provide a coinstallable backwards-compatible version to ease the transition.
If your software, properly packaged using the native package manager, needs to download 1 GB worth of dependencies, then your ZIP file is going to be at least that big as well.  There is no shortcut.