
Bregma

Member Since 09 Dec 2005

#5175456 Are degrees worth it for Level / Environment design areas?

Posted by Bregma on 22 August 2014 - 07:03 AM

I have some required reading for students, especially those in your position.




#5175111 Checkstyle

Posted by Bregma on 20 August 2014 - 02:47 PM

No need to strictly stick to the traditional 80 characters (as derived from when we coded via terminals)

Dude, you are sooo wrong.
 
It came from when we used punched cards, which only had 80 columns.  I still have boxes of them from when I was in university.  Those babies predate digital computers even.  Moses used them in the Bible when he got the 10 commandments; that's why none of them are more than 79 characters (the first column marked the commandment as a comment).




#5174906 Ubuntu + Performance

Posted by Bregma on 19 August 2014 - 08:17 PM

Well, those hardware specs are better than mine, and I develop the Unity DE, so I think you'll be fine.

 

Keep in mind that when you're running in fullscreen mode with Unity you're going directly to the hardware:  compositing is turned off.  Use a lighter DE if you wish: Lubuntu or Xubuntu do not use a compositor, but you won't see much difference in fullscreen mode.

 

Don't bother looking for proprietary Intel drivers:  there aren't any separate ones to find.  Intel open-sources its drivers and maintains them as part of Mesa.  Mesa is not just software rendering; it contains the hardware drivers too.
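
If you want to double-check which driver you ended up with, glxinfo (from the mesa-utils package on Ubuntu) will tell you; the exact renderer string below is only illustrative.

    $ glxinfo | grep 'OpenGL renderer'
    OpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile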




#5174431 Event/Action Management

Posted by Bregma on 18 August 2014 - 06:46 AM


I've already encountered some pitfalls with this approach, particularly the catches with using function pointers and void* variables, and was curious to see what other people are doing in this kind of case?

I see from the fact you're using std::map that you're already partially using C++.  You could go a step further and use std::function instead of the C way of raw pointers and void* and avoid most of those catches you've encountered.
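
For illustration, here's a minimal sketch of what I mean (the event name and handler signature are invented):

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    // Map event names to type-safe handlers: no raw function
    // pointers, no void*, and lambdas can capture local state.
    using Handler = std::function<void(int)>;

    int main()
    {
        std::map<std::string, Handler> handlers;

        int jumps = 0;
        handlers["jump"] = [&jumps](int strength) {
            ++jumps;
            std::cout << "jump #" << jumps
                      << " with strength " << strength << "\n";
        };

        handlers["jump"](42);  // dispatch an event by name
    }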




#5173191 QT vs. wxWidgets for OpenGL

Posted by Bregma on 12 August 2014 - 04:21 PM


Coming back to Qt, and setting aside the development build setup, is people's overall opinion on Qt more favorable than wxWidgets?

Consider this: there is an entire Linux desktop environment based on Qt (KDE).  Canonical, the company behind Ubuntu, has dumped GTK as its toolkit of choice and is developing its new multi-platform desktop using Qt.  There is no desktop using wxWidgets.

 

wxWidgets is pretty close to a free clone of Microsoft's MFC.  Microsoft recommends against the use of MFC and has done so for quite some years now.




#5173188 What's the industry like?

Posted by Bregma on 12 August 2014 - 04:15 PM


I live in Ontario, Canada - What's the industry like here?

Just an FYI, but Ottawa has a burgeoning gamedev industry, and the local university, Carleton, offers a gamedev degree program (not a coincidence).  It may be possible to work in the industry while gaining specialized formal education there.




#5173078 Can't solve problems without googling?

Posted by Bregma on 12 August 2014 - 07:23 AM

A tourist had just arrived in Manhattan with tickets to a concert and didn't know his way around.  The curtain time for the show was fast approaching and the tourist was becoming increasingly desperate to find the venue.

 

Finally, he approached a man walking swiftly and carrying a violin case, a sign he might have knowledge of the local entertainment industry.

 

"Excuse me," said the tourist, and the hurrying man looked up.

 

"Can you tell me how to get to Carnegie Hall?" queried the lost and desperate man.

 

The man with the violin case paused briefly and stared intensely at the tourist.  After a beat, he spat "Practice!!!" and hurried on his way.

 

To get good at something, you practise.  It's that simple.  Learning and repeating what others have done is one of the more effective ways of practising.




#5168624 C++ files organisation

Posted by Bregma on 23 July 2014 - 05:32 AM

I work with oodles of free software projects.  I've seen plenty with a separate include/ directory and many that have them combined.  I've seen many with a deep or broad hierarchy and the same with everything dumped into a single subdirectory.  Technically it makes no difference, and there is no de facto or de jure standard; it's entirely up to the taste of the most vocal or dominant developer.
 
Since packaging project source inevitably means installing into a staging directory, packaging isn't relevant to the layout choice.  Since installing means copying out of the source tree into an installation directory, installation isn't relevant either.
 
What is relevant is when someone comes along to try to read and understand the code: it's a lot easier when there isn't a separate include/ directory in the project and the headers and other source files are all combined in one hierarchy.  I've noticed the most vocal proponents of the separate include/ hierarchy tend to be those who spend no time maintaining other people's code.

It's also uncontroversial that in larger projects readability is improved by namespacing components into separate subdirectories, with all include references relative to the top of the hierarchy.  If each component produces a static convenience library (or, if required, a shared library), that also makes your unit testing easier.
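
As a sketch of the kind of combined layout I mean (the component names are invented):

    mygame/
        audio/
            mixer.h
            mixer.cpp
        render/
            scene.h
            scene.cpp
        main.cpp

Sources then say #include "audio/mixer.h" relative to the top of the hierarchy, and audio/ and render/ can each build a static convenience library for linking and unit testing.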




#5166313 hyphotetical raw gpu programming

Posted by Bregma on 11 July 2014 - 04:37 PM

You guys are wayyy over my head with this stuff.  I'm kinda with fir on this; I only have a vague notion of what a GPU does, but I figure it's like he says, a vast array of memory as data input, a similar vast array as output, and a set of processors that read and process instructions from yet another array of memory to transform the input to the output.  Is that not the case?

 

Do all the processing units always work in lock-step or can they be divided into subgroups each processing a different program on different input sets?

 

Is there a separate processor that divides up the data and feeds it or controls the main array of processors as appropriate?

 

I mean, I can describe how a traditional CPU works down to the NAND gate level (and possibly further), but I'd be interested in learning about GPU internals more.




#5165792 relocation unknown

Posted by Bregma on 09 July 2014 - 06:33 AM

(1) Branching is usually relative, so it's relocatable; access to local variables is also relative, so that kind of procedure does not need a relocation fixup table

Right, this is also known as position-independent code (PIC).  Not all CPUs support PIC (some lack a register-indexed memory read instruction), but the ones targeted by Microsoft do.  Some CPUs support only PIC.  It's a wide world, and you can compile C code to almost anything in it.
 
PIC requires the dedicated use of one or more registers.  The old Macintosh binary format (pre-OS X) dedicated the 68k's a4 for locals and a5 for globals.  The Intel architecture has a critical shortage of registers, so compilers tend not to use PIC for globals, and locals use the already-dedicated stack pointer (SP) register.
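
If you want to see the difference on Linux, compile the same file with and without -fPIC and compare the disassembly (the file names are invented):

    $ gcc -c foo.c -o foo.o
    $ gcc -c -fPIC foo.c -o foo-pic.o
    $ objdump -d foo-pic.o    # note the GOT-relative accesses to globals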

(2) I suspect (because I'm not sure about this) that code that (1) uses calls or (2) references global data needs fixups, as I suspect calls are not relative and global data access is also not relative

Yes.  Well, there's a bit of confusion here.  External references are not relative and are going to need some kind of resolution before runtime.  It's possible to have non-external globals that are not relative and can use absolute addresses.  It's a little more complicated than that if you're doing partial linking (eg. separate compilation).

(3) When watching some disassembly of .o/.obj files and such, I never see relocation tables listed.  Do most of these object files have such fixup tables built in or not?  Is there a way of listing them or something?

Depending on your tool, you may need to request the relocation tables be dumped explicitly.
 
If you were on Linux using the ELF format, running 'readelf -aW' on a .o, .so, or binary executable would reveal much.  Pipe it through a pager.
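
For example, with GNU binutils (the file name is invented):

    $ objdump -r example.o     # dump the relocation entries
    $ readelf -rW example.o    # the same information, in ELF terms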

(4) If I understand correctly, if some .obj module references 17 symbols

(I think they may be both external and internal; say 17 = for example 7 external functions, 7 internal functions, 2 internal static data, 1 external static data.)  Wouldn't it be easiest to define 17 pointers, not rewrite each move that references data and each call that calls a function, but only fill in those pointers at link time and use indirect (through-the-pointer) addressing?

But loaders prefer to physically rewrite each immediate reference for efficiency reasons?

The smaller the relocation table at load time, the faster loads will be.
 
Your static linker will do its best to resolve symbols at static link time.  If it can't, the symbol goes into the relocation table for resolution at load time.  Depending on settings, the symbols get resolved immediately or when needed (lazy binding).
 
Different binary standards resolve symbols in different ways.  An ELF file (eg. Linux) has a global lookup table (the global offset table) that gets loaded and patched as required.  A COFF file (eg. Windows) has a jump table that gets statically linked and backpatched at load time (the .LIB file for a .DLL).  A Mach-O file (eg. Mac OS X) behaves much like an ELF file, but in an incompatible way.
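
On an ELF system you can watch this happen; glibc's dynamic linker honours a couple of environment variables (the binary name is invented):

    $ LD_BIND_NOW=1 ./mygame      # force every symbol to resolve at load time
    $ LD_DEBUG=bindings ./mygame  # trace symbol resolution as it happens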

 

Some required reading for someone trying to understand this is the seminal paper by Ulrich Drepper on shared libraries, "How to Write Shared Libraries".  It's a little bit Linux-specific, but many of the concepts can be generalized, and I think it's just the sort of thing you might be looking for.  If not, it's still an interesting read.




#5165788 c++ function pointers

Posted by Bregma on 09 July 2014 - 06:04 AM


I usually avoid them for readability.

That's odd.  They were explicitly developed for enhanced readability, and that's exactly what I use them for.

 

Then again, I second the use of std::function.
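
To illustrate the readability point (all the names here are invented):

    #include <functional>

    // The raw declaration buries the intent:
    void register_raw(void (*callback)(int x, int y));

    // A named alias states what the parameter is for:
    using ClickHandler = void (*)(int x, int y);
    void register_click(ClickHandler callback);

    // And std::function additionally accepts capturing lambdas:
    void register_flexible(std::function<void(int, int)> callback);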




#5165522 raw binary to obj conversion

Posted by Bregma on 08 July 2014 - 05:42 AM

It's not entirely on topic, but I'm going to suggest you seriously consider adopting GNU/Linux as your development platform of choice.  It's a toolchain hacker's dream: documentation on the tools is generally at hand, and it uses open-source tools for pretty much everything, so you can either read the source to see how they work or read the detailed specification to see what they're supposed to do.

 

You'll find MinGW is a reduced copy of the GNU environment you usually get by default on Linux, so you're probably already familiar with much of how things work.  You can also bring much of what you learn back to MinGW to apply to Windows applications.

 

Trust me, you're going to make the switch sooner or later.

 

Linux uses the Executable and Linkable Format (ELF) for its binary objects (object files, shared libraries, executable binaries).  If you learn and understand the basics of that, it will go a long way toward understanding what you would need to do to convert other binary formats (.bin, COFF, etc).
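
For a taste, GNU objcopy can already wrap a raw binary in an ELF container on an x86-64 box (the file names are invented; the symbols shown are the ones objcopy synthesizes from the input file name):

    $ objcopy -I binary -O elf64-x86-64 -B i386:x86-64 assets.bin assets.o
    $ nm assets.o
    0000000000001000 D _binary_assets_bin_end
    0000000000001000 A _binary_assets_bin_size
    0000000000000000 D _binary_assets_bin_start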




#5164594 Installer for Linux?

Posted by Bregma on 03 July 2014 - 11:00 AM

Delivering deb and rpm packages provides the most native experience.  However, in my personal experience, maintaining these packages is a lot of work (i.e. you need to check it still works with each new Ubuntu/Debian release; sometimes it breaks or complains about something new, or some dependency that is still there changed its version numbering system and now can't install, or it forces users to download 1 GB worth of updates to install your software, etc. etc.), which is the reason you always have to provide a zip file just in case the deb/rpm didn't work.

If your software no longer works on an up-to-date system, it's better to find out and fix it early on.  Or you can skip that and just leave it completely broken for end users.  The release of new distro versions is never a surprise: the dates are almost always known six months in advance, and the prerelease versions are available for many months beforehand just so you can update your stuff.  In the case of a commercial tool, you may find you want to update that often anyway.
 
Generally, dependencies break their version scheme because they have an ABI break.  In that case, you probably want to update your packages to use the new ABI, so it's a good thing.  Most important packages will also provide a coinstallable backwards-compatible version to ease the transition.
 
If your software, properly packaged using the native package manager, needs 1 GB worth of dependencies, then your ZIP file is going to be at least that big as well.  There is no shortcut.
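
For what it's worth, the native tooling computes shared-library dependencies for you.  A binary package stanza in debian/control normally just says (the package name is invented):

    Package: mygame
    Architecture: any
    Depends: ${shlibs:Depends}, ${misc:Depends}
    Description: An example game

and dpkg-shlibdeps fills in the correct versioned library dependencies at build time.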




#5164535 Installer for Linux?

Posted by Bregma on 03 July 2014 - 05:09 AM

Installing into /opt seems like a rather reasonable approach since we currently need all files in the same directory.  nixstaller seems pretty nice, but it bothers me somewhat that it hasn't really been updated since 2009.  It also borders on the issue Bregma brought up; it seems a bit like the Windows way of doing stuff.

It is the Windows way of doing stuff, but most people won't care.  The purists will, but they'll either reject your product because it's not Free or they'll be vehemently vocal in forums on the internet, and either way they can be safely ignored.  The rest just want to install and use your software and don't care about such issues.

 

The real danger/disadvantage to having a complete bundle is (1) you will have to take care of your own security updates -- only really an issue if you have networked functionality or protect user assets in some way -- and (2) if you have any external dependencies at all (eg. C or other language runtimes) your product could break on the next system upgrade.  Unlike Windows, most Linux distros upgrade on a regular basis.
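
If you do bundle everything, the usual trick for keeping the bundle relocatable is an $ORIGIN-relative rpath, so the executable finds its bundled libraries next to itself (the paths are invented):

    $ gcc -o mygame main.o -Wl,-rpath,'$ORIGIN/../lib'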

 

It really sounds like the single bundle is your best option.  It does work well for Steam, although they have their own installer and automated update mechanism to alleviate the problems I outlined above.  Have you considered going through Steam?




#5164299 Installer for Linux?

Posted by Bregma on 02 July 2014 - 06:41 AM

What is the best way to release a closed-source commercial app for Linux?  What about adding entries to their application launcher of choice (ie. the kickstart menu on KDE or whatever)?

Firstly, don't try to force a Windows-centric view of software distribution onto a non-Windows platform.  That's like a book publisher finding they have a monster hit of English literature, looking to expand their market to South America, and asking how they can get all those people to learn to read English so they will buy the book.  It tends to be the wrong question.
 
Part of the problem stems from the misbelief that Linux is an operating system.  It's not, it's an OS kernel.  The thing people think of as 'Linux' is actually the distribution, like Red Hat, Fedora, Debian, Ubuntu, SuSE, and myriad others: they're the OSes.  You don't target 'Linux', you target a distribution.  Few distributions are truly compatible at the package or binary level.  Following the book simile, the publisher now wants to translate the work into South American to improve sales.  The next step will be to complain that there should be a central authority in South America to dictate what language people speak, and that it should look and sound like English, just like Jesus used in the Bible.
 
Anyway, there are two approaches to commercial distribution on Linux.

 

(1) Use the native package manager for your target distribution.  You'll want to specify a limited set of distributions you target, and make debs/rpms available.  Users download and install the software using their native package manager, and your software uses the local version of dependent libraries.  The package managers take care of making sure dependencies are installed and configured correctly.  You still need to test on all your supported platforms.  This is generally the preferred method.

 

(2) Use the Macintosh approach of creating a standalone bundle of everything you need (all binaries, executables, assets, shared libraries, the works).  You will still need to specify a limited set of distributions you target and make the bundles available for download.  You will probably need to write a bespoke installer, or at least a custom wrapper, and integrate it into the OS.  You will still need to test on all your supported platforms.  This is the method Steam uses for its games. 

 

You should follow the freedesktop.org standards for integrating your stuff into the launch system(s).  See ColinDuquesnoy's reply above.
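
A minimal launcher entry, following those standards, looks something like this (the names and paths are invented); it gets installed under /usr/share/applications or ~/.local/share/applications:

    [Desktop Entry]
    Type=Application
    Name=My Game
    Exec=/opt/mygame/bin/mygame
    Icon=mygame
    Categories=Game;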





