
Will make help me?

17 replies to this topic

#1 fir   Members   -  Reputation: -444


Posted 03 June 2014 - 07:10 AM

I am writing C/WinAPI code (MinGW, no IDE, just a simple editor and the command-line compiler).

Recently I established a maybe somewhat unique (I'm not sure) "project system": I divided my source into very small modules (each compiled separately to its own .o file), then link them all together into the final exe.

Those modules live in a large folder tree. Each module's source has its own folder, so I have a large number of folders (say 400) containing small modules - and it is not a flat set of folders but a tree about four to five levels deep (subfolders within subfolders).

It works, but I would like to make it a bit more comfortable. There are maybe two major inconveniences here:

1) header management

Each module of course exports its own header.

For simplicity I made one big header in the root of the folder tree that includes all the small headers down the tree (so I have one summary header with 400 lines of includes).

Conversely, each module includes that summary header. There is a small inconvenience here, because I must write something like

c:\mingw\bin\g++ -O3 -w -c phys_scanline.c -fno-rtti -fno-exceptions -I ..\..\..\

This "-I ..\..\..\" is needed to reach the main summary header up the folder tree.

2) object management

I have the same trouble managing the .o files for the linker: all my .o files lie in the big folder tree, so on the linker command line I must build a very long list of paths to these object files (as I said, it is close to 400 modules, though not all are active yet, because I'm rewriting my old framework).

Managing these 400 header references and 400 object references is burdensome [adding one module may not be much work, but as I said the modules are small and often added; refactoring, especially changing the folder structure, is then extremely burdensome] and consumes a significant amount of energy. On the other hand, this "multilevel folder" kind of project has some advantages: it is well ordered - graphics modules are in one folder, window modules in another. Finally, I don't know if I can keep the tree structure of the project while minimizing the large amount of work of managing all the references, and maybe make it even clearer (I need maximum possible clarity in all sources and the associated management scripts and binaries).

Does anybody have an idea how I can improve this? Will make maybe help with that? (Right now I just use 400 .bat files, one for compiling each module, and one root .bat to link everything. The bats are not so burdensome; more burdensome is editing the summary header file and the summary linker command line.)

 

 

 

 




#2 DiegoSLTS   Members   -  Reputation: 808


Posted 03 June 2014 - 07:46 AM

I have no idea what made you do that thing with folders and subfolders; it sounds like it has many more cons than pros.

 

I'm not sure if you're talking about makefiles and the "make" command, but yes, a makefile will help you: you can define some recursive rules to just compile and link everything. But maybe with your approach and those hundreds of .bat files it's kind of risky to make such a big change.
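For illustration, a minimal recursive makefile of the kind described here might look like this. The top-level folder names are taken from the OP's link.bat later in the thread; the wildcard depths and the assumption that each module folder has its own Makefile are illustrative, not the poster's actual setup.

```make
# Top-level Makefile sketch: recurse into each top-level folder, then link.
# Adjust the wildcard depths to match the real tree; program.exe is marked
# phony here so the sketch always relinks after the recursive builds.
MODULES := wis gs dris cs

program.exe: $(MODULES)
	g++ -o $@ $(wildcard */*/*.o */*/*/*.o) -s -lpsapi -lgdi32

$(MODULES):
	$(MAKE) -C $@

.PHONY: program.exe $(MODULES)
```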

 

About those header files with 400 #include lines... I don't know what can help you with that; it sounds like a bad idea to me and I'd have avoided it from the beginning.


Edited by DiegoSLTS, 03 June 2014 - 07:50 AM.


#3 Bacterius   Crossbones+   -  Reputation: 8134


Posted 03 June 2014 - 08:42 AM


For simplicity I made one big header in the root of the folder tree that includes all the small headers down the tree (so I have one summary header with 400 lines of includes).

 

Are you sure this really simplifies things? I have found common headers like these are a mistake, as they obscure the relationships between modules, and also happen to increase build times by a lot. I would strongly recommend the traditional approach of including only what your module depends on. That way you can even feed your code to e.g. doxygen or some other tool and it will generate a dependency graph which will show which modules depend on which, aiding in refactoring and design.

 

And maybe try to reduce the number of "modules"? I have a hard time believing you have 400 pieces of independent functionality in your code that can't be logically grouped in some fashion. A common separation is math/graphics/window/physics/input/network for games, but that's not the only one possible of course. But 400? Come on, not every function needs its own module.

 

 

 


Conversely, each module includes that summary header. There is a small inconvenience here, because I must write something like

c:\mingw\bin\g++ -O3 -w -c phys_scanline.c -fno-rtti -fno-exceptions -I ..\..\..\

This "-I ..\..\..\" is needed to reach the main summary header up the folder tree.

 

This is kind of unusual - typically your build system keeps track of the root source directory, and all includes are done relative to it. This is best especially if your headers are going to be consumed by others: they won't be including your headers from their respective directories, so if your headers all #include other headers relative to their own location in the header tree, it isn't going to work. Also, may I suggest compiling C code with an actual C compiler - or, if it's (some subset of) C++, marking it as such? People are going to be confused if they find C++ code in a .c file while trying to understand why your project isn't building for them.

 

 

 


I have the same trouble managing the .o files for the linker: all my .o files lie in the big folder tree, so on the linker command line I must build a very long list of paths to these object files (as I said, it is close to 400 modules, though not all are active yet, because I'm rewriting my old framework).

Managing these 400 header references and 400 object references is burdensome [adding one module may not be much work, but as I said the modules are small and often added; refactoring, especially changing the folder structure, is then extremely burdensome] and consumes a significant amount of energy. On the other hand, this "multilevel folder" kind of project has some advantages: it is well ordered - graphics modules are in one folder, window modules in another. Finally, I don't know if I can keep the tree structure of the project while minimizing the large amount of work of managing all the references, and maybe make it even clearer (I need maximum possible clarity in all sources and the associated management scripts and binaries).

 

So don't reference every single source file in the root makefile? Seems to me you aren't managing your dependencies properly, why does the root makefile (or batch file or whatever) need to know that you added a file somewhere deep down in the graphics module? In most projects I've seen and worked on, each module is compiled down to a static library by a separate makefile (for instance, graphics.a, window.a, etc..), with possible dependencies between modules (for instance, graphics.a may depend on math.a) and the root makefile simply links those together with the main program to build the final executable (without any knowledge of every source file involved - just the major modules). Basically, implement the modules as libraries (even if they aren't going to be released separately - developing them separately helps a lot with modularity and is just better practice across the board).

 

In other words, the graphics module knows how to build itself, and the root makefile simply asks the graphics module to do so and then links the result with the other modules. No need to overcomplicate things with 400 batch files.
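A sketch of what such a root makefile could look like, assuming each module directory already has its own Makefile that produces a static library (the module names here are illustrative, not the OP's actual layout):

```make
# Root Makefile sketch: each module builds itself into a static library;
# the root links libraries, not 400 individual objects.
MODULES := graphics window math input
LIBS    := $(foreach m,$(MODULES),$(m)/lib$(m).a)

program.exe: main.o $(LIBS)
	g++ -o $@ main.o $(LIBS) -lpsapi -lgdi32

# Delegate to each module's own Makefile; marked phony so the sub-make
# always gets a chance to decide what is out of date.
$(LIBS):
	$(MAKE) -C $(dir $@)

.PHONY: $(LIBS)
```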


The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#4 fir   Members   -  Reputation: -444


Posted 03 June 2014 - 08:43 AM

I have no idea what made you do that thing with folders and subfolders; it sounds like it has many more cons than pros.

 

I'm not sure if you're talking about makefiles and the "make" command, but yes, a makefile will help you: you can define some recursive rules to just compile and link everything. But maybe with your approach and those hundreds of .bat files it's kind of risky to make such a big change.

 

About those header files with 400 #include lines... I don't know what can help you with that; it sounds like a bad idea to me and I'd have avoided it from the beginning.

 

Rewriting/refactoring my "400 modules" system/project is possible, though as I say it is burdensome - it could, for example, take 3 days of somewhat stressful work (stressful because until you are done it is not working ;/ and, worse, if you put it aside it may be a little hard to recheck what you still have to mend ;/).

 

Why did I decide to try this tree? A tree gives a better, more developed level of order/organisation/tidiness. A linear pack of modules can simulate it with some prefixes (and I used that previously), but I decided to give a real folder tree a try.

 

The trouble is what I said - the dream would be for it to work automatically, i.e. without my having to maintain these hard references in the form of a summary header and a summary linker list.

 

If you're curious, this is what my link.bat looks like:

 

 

set PATH=c:\mingw\bin;
c:\mingw\bin\g++ -O3 -Wl,--subsystem,windows -w                                                              wis\window\change_resolution\change_resolution.o wis\window\resize_window\resize_window.o wis\window\setup_window\setup_window.o wis\window\toggle_fullscreen\toggle_fullscreen.o wis\window\game_pause\game_pause.o wis\service_input\service_keyboard\service_keyboard.o wis\service_input\service_mouse\service_mouse.o wis\service_input\service_rawinput\service_rawinput.o wis\service_input\input_assistant\input_assistant.o wis\service_input\camera_navigator\camera_navigator.o wis\system_info\cpu_info\cpu_info.o wis\system_info\ram_info\ram_info.o wis\system_info\qpc_timer\qpc_timer.o wis\system_info\run_benchmarks\run_benchmarks.o wis\configuration\configuration\configuration.o wis\client_data\blitter\blitter.o wis\client_data\depth_bitmap\depth_bitmap.o wis\client_data\frame_bitmap\frame_bitmap.o wis\client_data\stencil_bitmap\stencil_bitmap.o wis\handle_errors\handle_signals\handle_signals.o wis\handle_errors\handle_errors\handle_errors.o wis\assist_info\hud_assistant\hud_assistant.o wis\assist_info\plot_frame_time\plot_frame_time.o wis\assist_info\window_title\window_title.o gs\frame\frame.o gs\cars\cars.o gs\pixelguerilla\pixelguerilla.o         dris\2d\phys_pixel\phys_pixel.o dris\2d\phys_scanline\phys_scanline.o dris\2d\phys_vertline\phys_vertline.o dris\2d\phys_line\phys_line.o  dris\2d\phys_wire_circle\phys_wire_circle.o dris\2d\phys_wire_rectangle\phys_wire_rectangle.o  dris\2d\phys_fill_circle\phys_fill_circle.o dris\2d\phys_fill_rectangle\phys_fill_rectangle.o  dris\2d\phys_fill_triangle\phys_fill_triangle.o            dris\2d\phys_dot_scanline\phys_dot_scanline.o dris\2d\phys_dot_vertline\phys_dot_vertline.o dris\2d\phys_dot_wire_rectangle\phys_dot_wire_rectangle.o  dris\3d\camera\camera.o  dris\3d\point_3d\point_3d.o       cs\rand\rand.o                                                                                  -s   -fno-rtti -fno-exceptions 
-lpsapi -lgdi32 -o program.exe 
 
This is only about 40 modules, and only part of it - the whole project is nearly 10 times bigger.
 
This is all burdensome because sometimes I like to refactor it like a living organism (the names express some idea of the organisation of my modules, etc.).
 
I had no hope of doing it totally "hard-reference-less", because C does not support that way of working (a bit of a shame), but I would like to make it a bit simpler if possible.
 
Could you maybe elaborate a bit on what I can do with make? At present I do not know make at all.
 
I suspect I could write some scripts or a C program to scan my folder tree and produce, fully automatically, both the "summary header" and the "linker list" (such a thing would greatly help with refactoring work), but that could be considered a bit ugly too (and it is some work: at least a few hours, and probably several times more, as I am not too good at it). For large refactoring work (renaming branches of my tree and filenames) it would probably be of much help.
 
PS. Maybe someone would like to write such a script (or two scripts, one for the headers and one for the .o files) and make them available so I could try it?
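For what it's worth, the two generators described in that PS amount to a single directory walk. A minimal sketch in Python (the output file names all_headers.h and link_list.txt are made up for the example; the example layout is borrowed from the link.bat above):

```python
import os

def generate_build_lists(root):
    """Walk the module tree; return (#include lines, object paths), sorted.

    Headers use forward slashes (accepted by MinGW's preprocessor); objects
    use backslashes, ready to paste into a Windows link command line.
    """
    headers, objects = [], []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name == "all_headers.h":  # don't re-include our own output
                continue
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            if name.endswith(".h"):
                headers.append('#include "%s"' % rel.replace(os.sep, "/"))
            elif name.endswith(".o"):
                objects.append(rel.replace(os.sep, "\\"))
    return sorted(headers), sorted(objects)

def write_outputs(root):
    """Regenerate the summary header and the linker list at the tree root."""
    headers, objects = generate_build_lists(root)
    with open(os.path.join(root, "all_headers.h"), "w") as f:
        f.write("\n".join(headers) + "\n")
    with open(os.path.join(root, "link_list.txt"), "w") as f:
        f.write(" ".join(objects) + "\n")
```

Rerunning it after any rename regenerates both lists, which is exactly the refactoring case described in the post.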

Edited by fir, 03 June 2014 - 08:47 AM.


#5 fir   Members   -  Reputation: -444


Posted 03 June 2014 - 09:08 AM

 


Conversely, each module includes that summary header. There is a small inconvenience here, because I must write something like

c:\mingw\bin\g++ -O3 -w -c phys_scanline.c -fno-rtti -fno-exceptions -I ..\..\..\

This "-I ..\..\..\" is needed to reach the main summary header up the folder tree.

 

This is kind of unusual - typically your build system keeps track of the root source directory, and all includes are done relative to it. This is best especially if your headers are going to be consumed by others: they won't be including your headers from their respective directories, so if your headers all #include other headers relative to their own location in the header tree, it isn't going to work.

 

What do you mean? Can you explain how build systems do that?

(I did not understand.)

 

In my system I could probably define some bat variable containing the root of my project, like c:\code, and use that variable instead of the relative "-I ..\..\..\" (the relative form causes problems when refactoring, because it can be "-I ..\..\..\" in one bat but "-I ..\..\" or "-I ..\..\..\..\" in another). But I dislike such hard references even more than the relative ones. (If I move my root folder to c:\code2 I would then have to change this path - or is there some way of obtaining it automatically in the build scripts so it would keep working?)

 

What do you mean by build systems "keeping track" of this - how do such systems do it?

 

(I will answer the rest a bit later; I'm going to eat something.)

 

PS. While we are at it: is there a way to run my compile bats, like

set PATH=c:\mingw\bin;

c:\mingw\bin\g++ -O3 -w -c frame.c -fno-rtti -fno-exceptions -I ..\..\

without writing the module name by hand (here "frame.c") - to make them take the only .c file available in the folder and insert it automatically? (That way I could save the work of hand-writing it in each of the 400 compile bats.)
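Yes - a batch for loop can pick up whatever .c file the current folder contains, so one generic compile.bat could be copied into every module folder unchanged. A sketch (the include path back to the tree root still differs per folder, so it is passed as a parameter here; the parameter name is illustrative):

```bat
rem compile.bat - compiles the single .c file found in the current folder.
rem %1 is the relative include path back to the tree root (e.g. ..\..\),
rem since the depth differs per module folder.
set PATH=c:\mingw\bin;
for %%f in (*.c) do c:\mingw\bin\g++ -O3 -w -c %%f -fno-rtti -fno-exceptions -I %1
```

If a folder ever contains more than one .c file, the loop will compile each of them in turn.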

Edited by fir, 03 June 2014 - 09:16 AM.


#6 fir   Members   -  Reputation: -444


Posted 03 June 2014 - 09:24 AM

 


For simplicity I made one big header in the root of the folder tree that includes all the small headers down the tree (so I have one summary header with 400 lines of includes).

 

1)

 

Are you sure this really simplifies things? I have found common headers like these are a mistake, as they obscure the relationships between modules, and also happen to increase build times by a lot. I would strongly recommend the traditional approach of including only what your module depends on. That way you can even feed your code to e.g. doxygen or some other tool and it will generate a dependency graph which will show which modules depend on which, aiding in refactoring and design.

 

2)

 

And maybe try to reduce the number of "modules"? I have a hard time believing you have 400 pieces of independent functionality in your code that can't be logically grouped in some fashion. A common separation is math/graphics/window/physics/input/network for games, but that's not the only one possible of course. But 400? Come on, not every function needs its own module.

 

 

 

 

AD1) That is a lot of work - I agree it is logically better to include only the headers a module uses, but it is really much more work, especially when you refactor (a common header reduces that a lot).

[As for the slowdown, I have not noticed it - even a big common header with a few thousand declarations shouldn't be noticeable, IMO. Someone said it means, say, 400 files read from disk, so the reads might be noticeable (though the disk cache should probably eliminate that?), but the compiler's processing of them should be near zero.]

 

AD2) I do prefer small modules; indeed I made them very small (many modules have only a couple of functions, say 1 to 5-7 functions). I decided to try this for some reasons.



#7 fir   Members   -  Reputation: -444


Posted 03 June 2014 - 09:38 AM

So don't reference every single source file in the root makefile? Seems to me you aren't managing your dependencies properly, why does the root makefile (or batch file or whatever) need to know that you added a file somewhere deep down in the graphics module? In most projects I've seen and worked on, each module is compiled down to a static library by a separate makefile (for instance, graphics.a, window.a, etc..), with possible dependencies between modules (for instance, graphics.a may depend on math.a) and the root makefile simply links those together with the main program to build the final executable (without any knowledge of every source file involved - just the major modules). Basically, implement the modules as libraries (even if they aren't going to be released separately - developing them separately helps a lot with modularity and is just better practice across the board).

 

 

This doesn't help; I thought about it at the time but abandoned it.

Instead of one summary header with 400 includes (and the same for the linker), you have, say, 4 headers with 100 includes each and 4 linker scripts with 100 modules each. That is zero improvement: the real burden of carrying these 2 (or 3, if you count the bats) x 400 paths stays the same (these 3x400 module names are a source over a source - a long text of module names).


Edited by fir, 03 June 2014 - 09:42 AM.


#8 DiegoSLTS   Members   -  Reputation: 808


Posted 03 June 2014 - 09:56 AM

 

I have no idea what made you do that thing with folders and subfolders; it sounds like it has many more cons than pros.

 

I'm not sure if you're talking about makefiles and the "make" command, but yes, a makefile will help you: you can define some recursive rules to just compile and link everything. But maybe with your approach and those hundreds of .bat files it's kind of risky to make such a big change.

 

About those header files with 400 #include lines... I don't know what can help you with that; it sounds like a bad idea to me and I'd have avoided it from the beginning.

 

Could you maybe elaborate a bit on what I can do with make? At present I do not know make at all.

I suspect I could write some scripts or a C program to scan my folder tree and produce, fully automatically, both the "summary header" and the "linker list" (such a thing would greatly help with refactoring work), but that could be considered a bit ugly too (and it is some work: at least a few hours, and probably several times more, as I am not too good at it). For large refactoring work (renaming branches of my tree and filenames) it would probably be of much help.

PS. Maybe someone would like to write such a script (or two scripts, one for the headers and one for the .o files) and make them available so I could try it?

 

I won't explain how a makefile works; you'll find good information on Google. Start with a tutorial to understand its purpose: http://mrbook.org/tutorials/make/ (first result searching "makefile tutorial").

 

And then try something like this: http://lackof.org/taggart/hacking/make-example/ (first result searching "makefile recursive tutorial").

 

Makefiles are not considered "a bit ugly"; they're useful and the first option for handling these compilation tasks. It's not a hacky way to do it; it's a perfectly valid and standard way. There are other tools for this, like CMake, but I guess you don't need more than make.



#9 Bregma   Crossbones+   -  Reputation: 4748


Posted 03 June 2014 - 10:04 AM

No, make will not help you here.

 

The purpose of make is to rebuild only what has changed, and to make sure everything that depends on a change gets rebuilt (rebuild everything that's necessary, but no more).  You still need to specify dependencies somehow.  Much of the point of make is defeated if you have One Big Header that includes all other headers.

 

If you're using GNU make and the GNU compiler collection, you can use extensions that will calculate compile-time dependencies, eliminating much of the work: that's how wrappers like the Linux kconfig system and the GNU autotools work. You simply specify the translation units (generally .c or .cpp files) and the rest gets determined at build time. Other wrappers like CMake reinvent that using their own separate codebase. These will still give you build-time grief because of your "simplified" headers, but will be less work for you to maintain the dependencies.
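For illustration, the dependency-generation extensions mentioned here can be used from plain GNU make: -MMD and -MP are real GCC flags that emit a .d makefile fragment per object, recording which headers it actually included. A sketch (the wildcard depth is an assumption about the tree layout):

```make
# List only the translation units; header dependencies are written by the
# compiler itself (-MMD) into .d files that make re-reads on the next run.
# -MP adds phony targets so deleting a header doesn't break the build.
SRCS := $(wildcard */*.c)
OBJS := $(SRCS:.c=.o)

program.exe: $(OBJS)
	g++ -o $@ $(OBJS) -lpsapi -lgdi32

%.o: %.c
	g++ -O3 -MMD -MP -c $< -o $@

-include $(OBJS:.o=.d)
```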

 

You might also consider breaking your projects down into modules using static libraries, so each only needs to be rebuilt when the module changes and the public API is limited to the public library API:  this should simplify your link command lines and header structure in a good way, making any tight coupling manifest.

 

So, make on its own will not help you here, but using make in conjunction with the wrapper tools developed over the last half century or so will definitely help solve your problems. It's definitely worth looking into.


Stephen M. Webb
Professional Free Software Developer

#10 mhagain   Crossbones+   -  Reputation: 7413


Posted 03 June 2014 - 10:30 AM

I'm not sure why you think any of this is in any way "simple".  An IDE will do it all automatically for you - no work certainly seems simpler than the amount of work you've put in so far.


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#11 kunos   Crossbones+   -  Reputation: 2183


Posted 03 June 2014 - 10:50 AM

An IDE will do it all automatically for you - no work certainly seems simpler than the amount of work you've put in so far.

 

at last.. some sanity


Stefano Casillo
Lead Programmer
TWITTER: @KunosStefano
AssettoCorsa - netKar PRO - Kunos Simulazioni

#12 fir   Members   -  Reputation: -444


Posted 03 June 2014 - 11:16 AM

I'm not sure why you think any of this is in any way "simple".  An IDE will do it all automatically for you - no work certainly seems simpler than the amount of work you've put in so far.

I don't think it is simple - I said that it is not simple and that I would need to make it much simpler.

 

Does an IDE really make this management simple? (Maybe yes; I don't know, as I'm not using an IDE.) Does an IDE let me keep my modules in this kind of tree, compile them separately, and then link globally? Which IDE, for example? (I'm using the MinGW compiler.)

 

The main thing here is refactoring: if I refactor just a module's file name or path (folder names), does the rest of it change automatically?

 

I'm not using an IDE because I just prefer my old editor; some IDE editors I tried were less suitable for my needs.


Edited by fir, 03 June 2014 - 11:36 AM.


#13 fir   Members   -  Reputation: -444


Posted 03 June 2014 - 11:30 AM

No, make will not help you here.

 

The purpose of make is to rebuild only what has changed, and to make sure everything that depends on a change gets rebuilt (rebuild everything that's necessary, but no more).  You still need to specify dependencies somehow.  Much of the point of make is defeated if you have One Big Header that includes all other headers.

 

If you're using GNU make and the GNU compiler collection, you can use extensions that will calculate compile-time dependencies, eliminating much of the work: that's how wrappers like the Linux kconfig system and the GNU autotools work. You simply specify the translation units (generally .c or .cpp files) and the rest gets determined at build time. Other wrappers like CMake reinvent that using their own separate codebase. These will still give you build-time grief because of your "simplified" headers, but will be less work for you to maintain the dependencies.

 

You might also consider breaking your projects down into modules using static libraries, so each only needs to be rebuilt when the module changes and the public API is limited to the public library API:  this should simplify your link command lines and header structure in a good way, making any tight coupling manifest.

 

So, make on its own will not help you here, but using make in conjunction with the wrapper tools developed over the last half century or so will definitely help solve your problems. It's definitely worth looking into.

 

I only weakly understand this, though it seems to be on point (?)

 

I was thinking - one thing I would need from the compiler and linker (or at least an external helper tool) is an option to "compile/link all the sources/objects from a given folder", with no need to specify it all by hand. It is theoretically/physically possible, but I don't know of such a tool.

... I can also write helper scripts; as I said, the task is well scoped:

 

- scan a given folder tree, find all the header files, and flush their paths and names into the final summary include file

- scan a given folder tree, find all the .o files, and flush them with paths into some linker bat file

 

(Maybe someone would be able to quickly write this in some language, or in C as a form of exe script?)

 

 

PS. As for breaking it into libraries: I tried it, but it turned out I usually work on all parts of the project at once (both the libs and the middle of the game), so after the division it proved to be more burdensome than without it.

 

PS. One big header is really less work when you refactor; I'm not sure if you are aware of this, or overcome it in some way, or I am mistaken here. Say you have a header named "zmath.h" and use it from 17 modules - when you want to change that name, you need to search over myriads of modules and rename it. If you just have one big header, you only change the name there. I was doing constant refactoring of this type and it was slowing me down so much that I changed to one big header. At the end of the work I can manually put its contents into each module and comment out the unused includes, but right now I need a heavily refactoring-friendly environment.


Edited by fir, 03 June 2014 - 11:43 AM.


#14 Zao   Members   -  Reputation: 878


Posted 03 June 2014 - 12:23 PM

In some projects, includes are required to be relative to the top of the hierarchy. This allows you to have a single include directory, typically given with an absolute path, and to simply include headers by their full path relative to that location.

 

g++ -I%FIR_ECOSYSTEM_ROOT% foo.cc

 

In which foo.cc would include files in a manner like:

 

#include <boost/thread/thread.hpp>

#include <my/awesome/graphics/buffer.hpp>

#include <my/awesome/network/buffer.hpp>

 

By retaining the path information in the include directive you gain location independence and the ability to distinguish between several headers with the same filename.


To make it is hell. To fail is divine.

#15 Bregma   Crossbones+   -  Reputation: 4748


Posted 03 June 2014 - 12:37 PM


An IDE will do it all automatically for you

If, by 'automatic' you mean manually constructing the primary dependencies.  Dragging and dropping pictures of words in a picture of a directory hierarchy instead of editing a text file.  It's really the same amount of work, just a different medium.

 

Both are also still much less work than the way OP appears to be doing stuff at present.


Stephen M. Webb
Professional Free Software Developer

#16 Bregma   Crossbones+   -  Reputation: 4748


Posted 03 June 2014 - 12:43 PM


- scan given folder tree find all .o files and flush this with path into some linker bat file

That requires the object files to be built first.

 

I strongly suggest you try to learn something like CMake. It allows you to specify only the primary dependencies (.c or .cpp files) and the target binaries (.exe files), and the rest is magic. In your case, you would use it to generate makefiles which you would run with make in the MinGW environment. You can even write a custom rule to generate the mega summary header file.
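A minimal CMakeLists.txt of this kind might look like the following. The two source paths are taken from the link.bat earlier in the thread; in practice the remaining modules would be listed too, or gathered per-module with add_subdirectory.

```cmake
cmake_minimum_required(VERSION 3.5)
project(program C CXX)

# Only primary sources are listed; header dependencies are discovered
# automatically when the generated makefiles are built.
add_executable(program
    gs/frame/frame.c
    wis/window/setup_window/setup_window.c   # ...and the remaining modules
)

# Lets every module include the summary header from the tree root
# without counting ..\ levels.
target_include_directories(program PRIVATE ${CMAKE_SOURCE_DIR})
target_link_libraries(program psapi gdi32)
```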


Stephen M. Webb
Professional Free Software Developer

#17 fir   Members   -  Reputation: -444


Posted 03 June 2014 - 01:03 PM

In some projects, includes are required to be relative to the top of the hierarchy. This allows you to have a single include directory, typically given with an absolute path, and to simply include headers by their full path relative to that location.

 

g++ -I%FIR_ECOSYSTEM_ROOT% foo.cc

 

In which foo.cc would include files in a manner like:

 

#include <boost/thread/thread.hpp>

#include <my/awesome/graphics/buffer.hpp>

#include <my/awesome/network/buffer.hpp>

 

By retaining the path information in the include directive you gain location independence and the ability to distinguish between several headers with the same filename.

It would be OK if I could get this %THIS_ECOSYSTEM_ROOT% automatically from the build system, because I may just copy my project root from one place to another. If I could obtain that value it would be okay - is there some way to get this "current path" in bat or make?

(Though no, it probably will not help me, because I would have to set it in one bat and then read it in another. :C)
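For what it's worth, a .bat can discover its own location: %~dp0 expands to the drive and directory of the script being run, and a variable set in a bat invoked with "call" remains visible in the caller's session afterwards, so the set-in-one-bat, read-in-another concern has a standard answer. A sketch (the file and variable names are illustrative):

```bat
rem setroot.bat - placed at the project root; run once per console session
rem (or "call setroot.bat" from another bat) to record where the tree is.
set PROJECT_ROOT=%~dp0

rem Any module's compile bat can then use it instead of counting ..\ levels:
c:\mingw\bin\g++ -O3 -w -c frame.c -fno-rtti -fno-exceptions -I %PROJECT_ROOT%
```

Moving the whole tree to c:\code2 then needs no edits, since %~dp0 is recomputed every time the bat runs.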



#18 fir   Members   -  Reputation: -444


Posted 03 June 2014 - 01:31 PM

 


- scan a given folder tree, find all the .o files, and flush them with paths into some linker bat file

That requires the object files to be built first.

 

I strongly suggest you try to learn something like CMake. It allows you to specify only the primary dependencies (.c or .cpp files) and the target binaries (.exe files), and the rest is magic. In your case, you would use it to generate makefiles which you would run with make in the MinGW environment. You can even write a custom rule to generate the mega summary header file.

 

 

I can try CMake - I also think I can build my own scripts: one that would automatically build the common header, and a second that would link all the .o's in the folder tree into the exe. Though that maybe seems a bit ugly? I don't know - writing those scripts seems easier than learning CMake.

As to ugliness: the script that scans for and automatically links all the .o files into the exe is OK IMO, but the one that merges all the headers into one common header is aesthetically worse IMO (as it produces something important that lives outside the set of separate module folders).

 

I don't know - maybe there are also some other possibilities...






