
Bregma

Member Since 09 Dec 2005

#5163807 How do multiple people write code for one project?

Posted by Bregma on 30 June 2014 - 06:13 AM

(1) If you have one or more developers on a project, always use a revision control system (VCS).  Developed as we entered the Space Age in the 1970s, this technology has been shown to facilitate code sharing in a multi-user environment, to act as an aid to escrow for contractual obligations, and to save your bacon when Mr. Murphy stops by for a chat.  Please note the "one or more."  The only people who have regrets about a VCS are those who didn't use one.

 

(2) You will want to host the VCS on a commonly-accessible network node and provide the occasional backup of that node.  It's especially important that the node be commonly accessible for teams of more than one.  The advantage of using a separate node even for teams of one is the elimination of a single point of failure in your design; the advantage for larger teams should be self-evident.

 

(3) The easiest way to set up a locally-managed VCS service is to use one of the modern distributed revision control systems (DVCS).  The tools git, mercurial, and bazaar are the most popular DVCSes available, and all are fairly straightforward to set up as a service.

 

(4) Using a third-party DVCS service is even easier than maintaining your own.  Such services are generally easy to set up, provide regular backups, and often offer other services for team development, such as inline code reviews and publication (source code release downloads, wiki pages, etc.).  If you absolutely need privacy, there are commercial DVCS offerings, and setting up an in-house or private hosted service is not difficult.

 

(5) Most DVCS provide a simple way to tag and/or pull a particular "snapshot" of the code as it exists at a particular moment (older centralized revision control systems like SCCS, RCS, and SVN either do not provide that or have very clunky methods for doing so).  This is important for advanced processes such as releasing software, bug tracking, QA, and so forth.

 

In short, you should use a DVCS such as git, mercurial (hg) or bazaar (bzr) to keep and share your code.  You might consider using a third-party hosting service to make it easier and provide an automatic off-site backup of your most precious asset.




#5163422 speeding this with sse or sse intrinsics

Posted by Bregma on 28 June 2014 - 06:05 AM

The GCC documentation has some very useful examples.
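
For instance, in this style (a minimal sketch; the function and pointer names are mine, not the docs'):

#include <xmmintrin.h>

// Add four packed single-precision floats at once with SSE.
void add4(const float* a, const float* b, float* out)
{
    __m128 va = _mm_loadu_ps(a);            // load 4 floats, alignment not required
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb)); // out[i] = a[i] + b[i], i = 0..3
}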




#5162436 question about linking

Posted by Bregma on 23 June 2014 - 06:02 PM

Doesn't the C standard say something about this? Or is there some common rule about whether it gets linked in or not?

"Linking" is not a concept addressed by the language standard, no.  There is no requirement in the language standard that a system offer separate compilation of modules, and indeed there are embedded systems that do not.

 

Practically, though, most modern (post-1960s, at least) linkers will normally only satisfy undefined symbols from a static archive (library).  Dynamic shared objects (DLLs, .so files, .dylibs, and so on) are loaded by the dynamic link-loader in their entirety, just as an executable is, but their symbol relocation tables may not be resolved until required (so-called "lazy" loading).  Command-line options can be used to vary that behaviour (e.g. -Wl,--whole-archive passed to GCC).
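
A concrete sketch of the archive behaviour (the file and symbol names are my own invention):

// used.cpp
#include <cstdio>
void used() { std::puts("pulled in: satisfies an undefined symbol in main"); }

// unused.cpp
#include <cstdio>
void unused() { std::puts("left out: no undefined symbol refers to it"); }

// main.cpp -- the linker extracts used.o from the archive to satisfy used(),
// but unused.o never makes it into the binary (the unit is the object module).
// Illustrative build:
//   g++ -c used.cpp unused.cpp && ar rcs libdemo.a used.o unused.o
//   g++ main.cpp -L. -ldemo -o demo
// Adding -Wl,--whole-archive before -ldemo (and -Wl,--no-whole-archive after)
// would force both objects in.
void used();
int main() { used(); }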

 

Object modules may also have unreferenced symbols stripped.  That's going to depend on your linker, and likely on the options passed to the linker.




#5162082 -march=pentium3 -mtune=generic -mfpmath=both ?

Posted by Bregma on 22 June 2014 - 07:37 AM

1) I understand that -march says what instruction set the compiler should be restricted to (for example, setting -march=pentium3 builds my binary with only the instructions available on the Pentium III)

-march sets the minimum compatibility level... in this case it means Pentium III or later.


2) I also understand that -mtune says what target to optimize those instructions for; for example, I can generate P3 instructions but optimize them for a Core 2.
 
Confusingly, the docs say:
 
"-march=cpu-type Generate instructions for the machine type cpu-type. The choices for cpu-type are the same as for -mtune. Moreover, specifying -march=cpu-type implies -mtune=cpu-type. "
 
I doubt this is true - does it mean that when choosing -march=pentium3 -mtune=generic the -mtune setting is discarded, and this is equivalent to
-march=pentium3 -mtune=pentium3? I don't think so (this is confusing)

Why do you doubt it? It makes perfect sense: -march has priority. If you choose to set the minimum compatibility level, the optimizer will use that when making choices.

1. I would like to choose a reasonable code set that would work on older machines but also work OK on more modern ones. I chose -march=pentium3, as I doubt anyone uses something older than a P3, and I didn't notice any change when putting something newer here (like -march=core2 - I didn't notice any speedup)

While there are millions of pre-PIII machines still going into production, it's unlikely that your game will be running on them (they're things like disk controllers, routers, refrigerators, toasters, and so on). PIII is probably good enough, since it has PAE by default and other improvements like fast DIV, better interlocking, and extended prefetch.

It's also likely that newer architectures don't introduce new abilities that your picooptimization can take advantage of when it comes to something not CPU-bound, like a game.

2. In general, what else can I add to this command line to speed things up? (Or to throw away some runtime or exception bytes, or something like that.)

In general, such picooptimization is not going to make one whit of difference in a typical game. What you really need to do is hand-tune some very specific targeted benchmark programs so they show significant difference between the settings (by not really running the same code), like the magazines and websites do when they're trying to sell you something.

I'm using -O2 here, as I noticed no difference with -O3

Hardly surprising, since most picooptimizations don't provide much noticeable difference in non-CPU-bound code. -O2 is likely good enough (and definitely better than -O1 or -O0), but -O3 has been known to introduce bad code from time to time, so I always stay away from it.

I noticed that -mfpmath=both sped things up (though the docs say something about it being dangerous; I didn't understand why). -ffast-math and -funsafe-math-optimizations also sped things up.

Those switches end up altering the floating-point results. You may lose accuracy, and some results may deviate from the IEEE standard in their least-significant digits. If you're doing a lot of repeated floating-point calculations in which such error can propagate quickly, you will not want to choose those options. For the purposes of most games, they're probably OK. Don't enable them when calculating missile trajectories for real-life nuclear warheads. Don't forget GCC has other uses with much stricter requirements than casual game development.
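
A tiny illustration of why: those switches allow the compiler to reassociate floating-point expressions, and floating-point addition is not associative, so the rounding changes. (The variable names here are my own.)

#include <cstdio>

int main()
{
    float a = 1e20f, b = -1e20f, c = 1.0f;
    float strict  = (a + b) + c; // evaluated as written: 0 + 1 == 1
    float relaxed = a + (b + c); // a reassociation fast-math is free to pick:
                                 // c vanishes into b's magnitude, giving 0
    std::printf("%g vs %g\n", strict, relaxed);
    return 0;
}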

I'd say that while it's fun to play with the GCC command-line options and it's a good idea to understand them, they're not really going to give you a lot of optimization oomph. You will get far more bang for your buck playing with algorithms and structuring your code and data to take advantage of on-core caching.
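
For example (a sketch with illustrative names; it assumes m holds N*N ints), just flipping a traversal to stride-1 order routinely buys more than any -m flag:

#include <vector>

constexpr int N = 1024;

long sum_rows(const std::vector<int>& m)   // stride-1: the cache's best friend
{
    long s = 0;
    for (int r = 0; r < N; ++r)
        for (int c = 0; c < N; ++c)
            s += m[r * N + c];
    return s;
}

long sum_cols(const std::vector<int>& m)   // stride-N: a cache miss magnet
{
    long s = 0;
    for (int c = 0; c < N; ++c)
        for (int r = 0; r < N; ++r)
            s += m[r * N + c];
    return s;
}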

Also, if you haven't already, you might want to read about the GCC internals to understand more of what's going on under the hood.




#5161151 Re-learning C++ and some help with learning it.

Posted by Bregma on 17 June 2014 - 02:21 PM

I feel the same way about playing the piano. I would really love to be able to tickle those ivories like a pro and every time I walk by it I feel a little guilty. I just hate not being able to play really well and I have ideas for some really good music, but I hate the learning and practice.

Is there an easier way to get to Carnegie Hall?


#5160350 using static initialisation for parallelization?

Posted by Bregma on 13 June 2014 - 01:10 PM

Anyway, this is a pitfall trap for me, implicitly putting in something that slows down and bloats my program.
 
There should be large text: * * * WARNING: POSSIBLE CODE SLOWDOWN (reason here) * * *

 
Yes, it sort of goes against the C++ philosophy of "pay only for what you use."  It could be argued, however, that you're using function-local static variables, so you're paying the price.  That argument is getting kind of sketchy, though, because it can be countered with "but I'm not using multiple threads, so why should I pay the price?"

Beware of letting a committee near anything, even for a minute.
 

I had never heard the word 'serialization' used in that sense (serialization usually meant saving some kind of data to disk), though this meaning is quite usable.

Yes, I've run into that before. A lot of people use 'serialization' to mean streaming data, a synonym for 'marshalling'. I understand Java used that in its docs and it took off from there. Perhaps it originated from the act of sending data over a serial port (RS-232C) although we always used the term 'transmit' for that (and 'write to disk' for saving to disk, maybe 'save in text format' to be more explicit).

I'm using 'serialization' in its original meaning: enforcing the serial operation of something that could potentially be performed in parallel or simultaneously. The usage predates the Java language, and so do I. I apologize for the confusion. If anyone can suggest a better term, I'm open to suggestions.


#5160288 using static initialisation for parallelization?

Posted by Bregma on 13 June 2014 - 07:43 AM


What is the mechanism by which it slowed my program and grew it by 50 KB?

I suspect, without proof, that it pulled in some library code to wrap and serialize the initialization of the function-local statics, and the slowdown is because that code gets executed.  Thread serialization generally requires context switches and pipeline stalls.  Without knowing the code, I suspect that code path needs to be executed every time so it can check to see if the object has been initialized.
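
In other words, something morally equivalent to this sketch (the real compiler hooks are the Itanium ABI's __cxa_guard_acquire/__cxa_guard_release rather than std::call_once; Widget is a stand-in type):

#include <mutex>
#include <new>

struct Widget { int x = 42; };

Widget& instance()
{
    static Widget w;   // what you write
    return w;
}

// Roughly what the compiler arranges for you: a guard that is checked on
// every call, plus serialized construction on the first call from any thread.
Widget& instance_expanded()
{
    static std::once_flag guard;
    alignas(Widget) static unsigned char storage[sizeof(Widget)];
    std::call_once(guard, [] { ::new (storage) Widget(); });
    return *reinterpret_cast<Widget*>(storage);
}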




#5160269 using static initialisation for parallelization?

Posted by Bregma on 13 June 2014 - 05:12 AM

No.  It did it so that if you are initializing function-local statics in a multi-threaded environment, the initialization will have defined results.

 

It certainly does not affect namespace-level static initialization, nor does it imply anything about the introduction of a threaded environment during application initialization.
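
To make the distinction concrete (a sketch; expensive() is a stand-in for any nontrivial initializer):

int expensive() { return 42; }   // stand-in for any nontrivial initializer

int g = expensive();   // namespace scope: runs before main(); C++11 adds
                       // no new thread-safety guarantee here

int& local()
{
    static int x = expensive();   // function-local static: C++11 guarantees
                                  // exactly-once, thread-safe initialization
    return x;
}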




#5159569 Step by step ARM emulator

Posted by Bregma on 10 June 2014 - 01:05 PM

Perhaps a combination of QEmu and GDB might work?




#5159330 remapping the keyboard (esp {} () )

Posted by Bregma on 09 June 2014 - 02:05 PM


It's okay for me; I strongly do not want to use all my fingers.
The problem is I do not like moving my hands, I just like moving my fingers.

Well, I can't help at all because I'm not familiar with Microsoft Windows, but I wish you luck.  It's important that you bend technology to your needs, not bend to the needs of technology. There's no doubt that keyboard technology was invented purely to meet the needs of 19th century technology.  Go and invent a better keyboard.




#5157979 what is most difficult in programming to you?

Posted by Bregma on 03 June 2014 - 07:03 PM

Trying to decide between two (or sometimes more) relatively equally good alternatives, usually for minor details.  The result is frequent decision paralysis.




#5157909 will make help me?

Posted by Bregma on 03 June 2014 - 12:43 PM


- scan a given folder tree, find all the .o files, and flush them with their paths into some linker bat file

That requires the object files to be built first.

 

I strongly suggest you try to learn something like CMake.  It allows you to specify only the primary dependencies (.c or .cpp files) and the target binaries (.exe files), and the rest is magic.  In your case, you would use it to generate makefiles which you would run with make in the MinGW environment.  You can even write a custom rule to generate that mega header file.




#5157904 will make help me?

Posted by Bregma on 03 June 2014 - 12:37 PM


An IDE will do it all automatically for you

If by 'automatic' you mean manually constructing the primary dependencies: dragging and dropping pictures of words in a picture of a directory hierarchy instead of editing a text file.  It's really the same amount of work, just a different medium.

 

Both are also still much less work than the way OP appears to be doing stuff at present.




#5157854 will make help me?

Posted by Bregma on 03 June 2014 - 10:04 AM

No, make will not help you here.

 

The purpose of make is to rebuild only what has changed, and to make sure everything that depends on a change gets rebuilt (rebuild everything that's necessary, but no more).  You still need to specify dependencies somehow.  Much of the point of make is defeated if you have One Big Header that includes all other headers.

 

If you're using GNU make and the GNU compiler collection, you can use extensions that will calculate compile-time dependencies, eliminating much of the work.  That's how wrappers like the Linux kconfig system and the GNU autotools work:  you simply specify the translation units (generally .c or .cpp files) and the rest gets determined at build time.  Other wrappers like CMake reinvent that using their own separate codebase.  These will still give you build-time grief because of your "simplified" headers, but will be less work for you to maintain the dependencies.

 

You might also consider breaking your projects down into modules using static libraries, so each only needs to be rebuilt when the module changes and the public API is limited to the public library API:  this should simplify your link command lines and header structure in a good way, making any tight coupling manifest.

 

So, make on its own will not help you here, but using make in conjunction with the wrapper tools developed over the last half century or so to solve these problems will definitely help.  It's definitely worth looking into.




#5154802 Location of data in the class

Posted by Bregma on 20 May 2014 - 07:01 AM


I was also curious if the same could be said for function arguments:

Vector4(float _x, float _y, float _z, float _w);

Would all these be in the same order and sequential?

Definitely not.  Most early compilers for the PDP and related architectures (VAX, PPC) always pushed arguments on the stack in reverse order.  SPARCs and IBM mainframes would always pass 4 float args like in the example above in registers.  I believe many currently-popular compilers will push the args on the stack in order unless there are free registers, in which case one or more are passed in registers, depending on the optimization settings and link visibility.  Of course, the compiler could choose to inline the entire function, in which case there are no parameters at all.

 

So, no, never ever depend on the arguments to a function being in the same order and sequential in memory.
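
For the record, the mistake that this lack of guarantees breaks looks like the following (a deliberately broken sketch):

void bad(float _x, float _y, float _z, float _w)
{
    const float* p = &_x;
    // Undefined behaviour lurks here: nothing says _y, _z, and _w live at
    // p[1], p[2], and p[3]. They may be in registers, on the stack in a
    // different order, or gone entirely after inlining.
    // float y = p[1];   // do not do this
    (void)p; (void)_y; (void)_z; (void)_w;
}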





