

Member Since 16 Oct 1999

#5235081 Why do you need the .h .cpp AND .lib files?

Posted by Aldacron on 16 June 2015 - 04:43 AM

I think this is one of those cases where analogies just aren't the best way to answer. Better to get your hands dirty. Please bear with me through the parts you are already familiar with.

Create three C files like so:

// fun1.c
#include <stdio.h>

void fun1(void) {
    puts("Running fun1.");
}

// fun2.c
#include <stdio.h>

void fun2(void) {
    puts("Running fun2.");
}

// main.c
extern void fun1(void);
extern void fun2(void);

int main(int argc, char **argv) {
    fun1();
    fun2();
    return 0;
}

You could put the prototypes of fun1 and fun2 in a header file and include it in main.c. Were you to do so, the C preprocessor would just replace the #include directive with the content of the header file anyway. The preprocessor is just an elaborate text substitution machine. Headers allow you to reuse the same declarations across multiple files without typing them over and over and over. Anyway, no need for one here.

Given a GCC compiler (I'm using the tdm64 distribution of MinGW on Windows), you can compile all three files into an executable with the following command line:

gcc main.c fun1.c fun2.c -odemo

Doing so results in an executable, but no intermediate files. The compiler has taken all three source files, translated them to the final output format, and passed everything directly to the linker to create the executable. This approach requires all of the source files. We can split it up, though, like this.

gcc -c fun1.c
gcc -c fun2.c
gcc main.c fun1.o fun2.o -odemo

The -c flag tells the compiler to compile the source file and store the output in an intermediate file (called an object file with a .o extension in this case), so the first two lines produce the files fun1.o and fun2.o. The third line creates the executable using the object files instead of the source files. Now, you can distribute fun1.o and fun2.o to other people (in which case you would want to create a header with the function prototypes) and they never need to see your original source.

Distributing two object files isn't a big deal, but if you've got several of them, then it can be a bit of an annoyance for the users of your project to have to pass all those object files to the compiler. So you can bundle them up into a library (or archive) to make it simpler. Given that you've already created fun1.o and fun2.o, you can then do this.

ar rcs libfuns.a fun1.o fun2.o

With this, ar (the GNU archiver, or librarian) takes the two object files and packs them together in a library file, much as you would add files to a zip archive (except they aren't compressed in this case). When you have dozens of object files, it's much more convenient to pack them into a library like this. Then the user can do the following.

gcc main.c libfuns.a -odemo.exe

Or, more commonly (given that a library is usually not in the same directory as the source):

gcc main.c -L. -lfuns -odemo.exe

-L tells the linker to search a specific directory, in this case the current directory (specified via the '.') for any libraries. The -l (lowercase L) in this case says to link with libfuns.a.

So, to directly answer your question, you don't need the YAML source to build your project, you only need the library and the YAML headers. However, to get the YAML library, you first have to compile the YAML source if it isn't available for download somewhere. Then you can link the library with your project. Alternatively, you could add the YAML source directly into your build system and compile it together, but then you have to worry about how to configure the source, which compiler options to use and so on. It's typically a much better choice to use the build system that ships with any library project to compile the library separately from your project, then link with the resulting binary. That way, you let the library maintainers worry about how best to configure the compiler output for the library and you can focus on configuring the output of your own project.

#5201221 Problem with a getter function

Posted by Aldacron on 01 January 2015 - 10:30 PM

The foo in your main function is *not* the same instance as the foo in your Moo instance. They are two different objects. Change your main method to this to see what I mean:

#include <iostream>

#include "Moo.h"

using std::cout;
using std::endl;

int main()
{
    bool running = true;

    Foo foo;    // this foo is local to main -- a separate object from the foo inside any Moo

    return 0;
}
You will need to call moo.foo.Update() to do what you want, either by adding an update method to Moo that calls it internally or some other means.

#5200730 Cannot grasp the concept of delegation

Posted by Aldacron on 30 December 2014 - 12:05 AM

I think the bit about multiple-inheritance is distracting you. The main point was given in the Cat example at the bottom.

If you look at that sample code, the cat class has a member 'sound' which implements the ISoundBehavior interface. That interface has a 'makeSound' method. Cat also has a 'makeSound' method, but it doesn't actually make a sound itself. Instead, it "delegates" to the sound member, by calling sound.makeSound.

It's just like when a manager in an office gets assigned a task from a superior. The manager isn't likely going to do all of the work related to the task, instead delegating it to the appropriate team members. This is the same thing -- Cat doesn't actually make a sound, but delegates to a member. Notice that the Cat class also has a setSoundBehavior method. This means you can change the sound behavior of a Cat instance from MeowSound to RoarSound. If Cat had a concrete implementation of makeSound, meaning Cat itself made the meow sound and didn't delegate to a member, this wouldn't be possible.
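The Cat example can be sketched like this. Note that makeSound returns a String here (instead of printing) purely so the behavior is easy to inspect; the class and method names follow the post, but the exact bodies are my own reconstruction:

```java
// Delegation: Cat doesn't make a sound itself, it forwards to a member.
interface ISoundBehavior {
    String makeSound();
}

class MeowSound implements ISoundBehavior {
    public String makeSound() { return "Meow"; }
}

class RoarSound implements ISoundBehavior {
    public String makeSound() { return "Roar"; }
}

class Cat {
    private ISoundBehavior sound = new MeowSound();

    public String makeSound() {
        return sound.makeSound();      // delegate to the sound member
    }

    public void setSoundBehavior(ISoundBehavior s) {
        sound = s;                     // swap the behavior at runtime
    }
}
```

Because Cat only delegates, calling setSoundBehavior with a RoarSound instance changes what makeSound produces without touching the Cat class itself.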

Now, let's look at that line "Delegation is like inheritance done manually through composition." All that means is that delegation simulates inheritance.

Let's say that class A has a method "makeFoo" and class B has a method "makeBar." Now you want class C to have the same methods and to be treated as both an A and a B -- meaning any method that takes an A and any method that takes a B can both take a C. In C++, you could use multiple inheritance. That simply means that C inherits from both A and B, so it has both makeFoo and makeBar methods -- C is an A and C is a B.

In Java, multiple inheritance is not an option -- you only have single inheritance. However, we do have multiple interface inheritance in Java. You can inherit from exactly one class, but as many interfaces as you want. The Single Inheritance with Delegation example they give is basically saying that C inherits from A, so has all of A's methods; and C has a member of type B, plus the same interface as B. All calls to the methods on C that have B's interface are delegated to the B member.

I don't think that's a good example, though. So take a look at this one.

public interface A {
    void makeFoo();
}

public interface B {
    void makeBar();
}

class AImpl implements A {
    public void makeFoo() {
        System.out.println( "I made a Foo." );
    }
}

class BImpl implements B {
    public void makeBar() {
        System.out.println( "I made a Bar." );
    }
}

class C implements A, B {
    private A a = new AImpl();
    private B b = new BImpl();

    public void makeFoo() {
        // Delegate to a
        a.makeFoo();
    }

    public void makeBar() {
        // Delegate to b
        b.makeBar();
    }
}
So now C can be passed around as an A or as a B and can behave as an AImpl and a BImpl. In C++, you would be able to inherit directly from AImpl and BImpl, but in Java you can only simulate it using delegates. Even better, if you want to use a different implementation of A or B, such as AImpl2 or BImpl2, you can add setA and setB methods to change behavior at runtime.

#5165458 LWJGL Cache Textures in a HashMap

Posted by Aldacron on 07 July 2014 - 11:11 PM

I haven't used slick-utils, but if you can't find any option for caching in the documentation, then it's very simple to implement it yourself.

  • Create your own TextureLoader class which contains an instance of the slick-utils loader (unless the Slick loader uses static methods, of course) and a java.util.HashMap<String,Texture>.
  • Implement a load method that accepts a file name and returns a texture.
  • Inside your load method, take the following steps:
    • Try to fetch the texture from your cache by calling 'get' on your hash map instance using the file name as a key.
    • If the 'get' call returns null, call the Slick texture loader, add the loaded texture to the hash map with the file name as the key, then return it.
    • If the 'get' call doesn't return null, then simply return the texture.
  • Profit!
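The steps above can be sketched like this. Texture is a stand-in class and loadFromDisk simulates the slick-utils loader call; those names are mine, not slick's:

```java
import java.util.HashMap;
import java.util.Map;

// Placeholder for the real slick-utils Texture type.
class Texture {
    final String path;
    Texture(String path) { this.path = path; }
}

class CachingTextureLoader {
    private final Map<String, Texture> cache = new HashMap<>();
    int diskLoads = 0;   // counts real loads, just to show the cache working

    public Texture load(String path) {
        Texture tex = cache.get(path);    // 1. try the cache first
        if (tex == null) {                // 2. miss: load it and remember it
            tex = loadFromDisk(path);
            cache.put(path, tex);
        }
        return tex;                       // 3. cached or freshly loaded
    }

    private Texture loadFromDisk(String path) {
        diskLoads++;
        return new Texture(path);  // real code would call the slick-utils loader here
    }
}
```

Loading the same file name twice returns the same instance, and the disk is only hit once.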

#5165164 Java LWJGL Need Help Designing more efficient texture loading.

Posted by Aldacron on 06 July 2014 - 07:44 PM

Have your texture loader cache your textures in a HashMap keyed on the file name. When you ask it to load a texture, it first checks the cache to see if the texture exists. If it does, it returns the existing instance rather than loading it from disk again.

#5162690 java Constructor parameters

Posted by Aldacron on 24 June 2014 - 08:24 PM

You seem to be fundamentally misunderstanding something. Your specific item types, like Weapon, shouldn't contain an item. That isn't going to help you at all. For a purely componentized approach, your item class should hold a list (or map) of Components. Specific components give the item its properties.

For example, you would have a Component base class and some subclasses might be WeaponComponent and BuffComponent. Your Item class maintains properties common to all items, like durability, weight, isTradeable, or whatever. A WeaponComponent would maintain properties that differentiate weapons from other items, damageAmount, damageType, requiredSkill, and so on. Using this approach, you could define your weapon components in a JSON file (or YAML/XML/Take Your Pick). Swords, maces, and bows become defined purely by data, rather than by concrete classes. Then, when you load in a weapon, you can do something like this:


WeaponComponent weapon = new WeaponComponent( /* values from wherever go here */ );
Item item = new Item( /* item values for this particular weapon */ );
item.addComponent( weapon );

// Let's make it magic
BuffComponent buff = new BuffComponent( /* buff values from somewhere */ );
item.addComponent( buff );

itemsList.add( item );

Using this, you can turn any item in the game, such as a chair, into a weapon. You can give any item, whether it's a weapon or not, a magic buff. It gives you a great deal of flexibility. However, you also increase the complexity a bit. For one thing, you'll need a way to tell if an item has a specific component or not. For example, when the player wants to attack an NPC, you'll want to check if the item he selects to attack with actually has a weapon component. How you do this mostly depends on what sort of container you choose to hold an Item's components, but also whether you want the item types to be fixed or want to allow modders to add new items, and so on.
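One common way to do that lookup is to key the component container on the component's class. This is a hypothetical sketch; the names mirror the snippet above, but the Map-based storage and the hasComponent/getComponent methods are my own assumptions:

```java
import java.util.HashMap;
import java.util.Map;

abstract class Component {}

class WeaponComponent extends Component {
    int damageAmount;
    WeaponComponent(int damageAmount) { this.damageAmount = damageAmount; }
}

class Item {
    // One component per type, keyed by the component's class.
    private final Map<Class<? extends Component>, Component> components = new HashMap<>();

    void addComponent(Component c) {
        components.put(c.getClass(), c);
    }

    boolean hasComponent(Class<? extends Component> type) {
        return components.containsKey(type);
    }

    <T extends Component> T getComponent(Class<T> type) {
        return type.cast(components.get(type));  // null if the item lacks it
    }
}
```

With this, the attack code can ask item.hasComponent(WeaponComponent.class) before letting the player swing a chair at an NPC.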

Between the extreme of the purely data-driven component-based approach and the other extreme of the strict class hierarchy lies a middle ground where you can come up with a sort of hybrid system. In other words, there is more than one way to skin this cat. Every approach will come with its pros and cons. For games that are small in scope and that don't need the benefits of a component-based system, the class hierarchy is perfectly fine. Implementing one of these systems is the easy part. The hard part is making yourself aware of all of the options available to you and learning how to choose the approach that fits your needs.

#5160766 Just a couple of Data-Oriented Design questions.

Posted by Aldacron on 16 June 2014 - 12:02 AM

Personally, I think we get too lost sometimes in the rigidity of the 'rules' of a given paradigm. It's the reason why you often see questions around here about "true" or "real" OOP. Instead, we should be viewing the rules as loose sets of guidelines that can help you achieve a specific end.

In the case of DOD, I see that the end goal is a more cache-friendly data layout. What that means in practice is going to vary from project to project. In some cases it may very well be possible to have a very clear distinction between data types such that you can keep everything in separate arrays and without sacrificing cache coherency in the slightest. In others, it may mean that some data ought to be interleaved, or it might mean a broader category of objects that group related data (like your Asteroids example with the xforms and motions).

The short of it is, in order to understand how best to lay out your data, you need to understand how you're using it. IMO, you're absolutely on the right track and asking all the right questions. Just don't box yourself in by trying to fit into a rigid ideal, but rather adapt the paradigm to your particular use case.

#5160211 3rd Party DLLs / 32bit64bit / Directories / Loading

Posted by Aldacron on 12 June 2014 - 10:26 PM

But now I would simply like to sort things into subfolders. With the exception of my DLLs, I've never called LoadLibrary on 3rd-party DLLs; they've simply loaded from the exe's dir.
Now I'd like to move them, and I'm wondering what's the best way to go about it.

If you don't want to link statically, move your executables into separate subdirectories along with the bit-specific DLLs.


Then you can continue to link with the import libraries and the DLLs will be loaded automatically as normal. As for loading resources and your custom DLLs, you can use the Win32 API to determine the directory in which the executable lives and use that as the base for finding the resource directories. Or, since you mentioned PhysFS, it provides a cross-platform way to do this. Look into PHYSFS_getBaseDir. Or, assuming you've configured PhysFS to put the base directory (or the directory above it, which would be more appropriate in your case) on the search path, you can use the PHYSFS API to load the files and not worry about specifying the full path.


#5157312 Which will be better for a beginner, SDL, SFML or OpenGL?

Posted by Aldacron on 01 June 2014 - 05:56 AM

For 2D games, you can do well enough with either SDL or SFML. Both have an accelerated 2D rendering API that uses OpenGL under the hood (SDL can also use Direct3D). If you use OpenGL directly, you'll still find SDL, SFML, or something else (like GLFW) useful to abstract the platform away and make it easier to create the game window and deal with system events, but you'll be implementing your 2D renderer from scratch. Since you're just starting out, implementing your own renderer isn't likely a great way to go right now. Just pick SFML or SDL (or even Allegro) and go for it.

#5149364 Any way to make this simple tennis game more OO?

Posted by Aldacron on 25 April 2014 - 06:30 AM

I don't really agree with this.
Nothing about OOP is rigid, with the exception of cross-cutting concern implementations which lead to scattering (code duplication) and tangling (classes that have to do things they should not have to worry about).


When keeping a strict focus on an object-oriented design it is extremely easy to get carried away and wind up with a very inflexible architecture that is difficult to maintain. Not all of the traps are inherent to OO, but they are, IMO, easier to fall into. A few major examples off the top of my head are inheritance hierarchies that go too deep, classes at too granular a level, and tight coupling between systems. Focusing on whether or not your code is "OO enough" can easily lead down these paths. I have seen this more times than I can count (particularly when I was in the world of Java web apps a decade or so ago). And it's extremely easy for beginners to fall into this because they don't have the experience behind them to understand the side effects of a deep object hierarchy, or why they probably don't need separate classes for their forks and spoons and every item in the game.

Rather than focusing on whether a codebase is "OO enough", a better way to look at it is to take each module or system in isolation and take the approach that is appropriate for that purpose. There's plenty of solid advice out there on how to do that, such as keeping object hierarchies as flat as possible, programming to interfaces where it makes sense, choosing free functions over member functions when possible, and so on.

So I would suggest to the OP (and anyone else) not to sweat it too much about "how OO" to go, to get as broad an experience as possible in different paradigms through reading and implementation, and that the primary focus of any project should (ideally) be maintainability, not whether or not the code adheres to a particular paradigm. Of course, the definition of maintainability will change depending upon the size and scope of the project, but the end goal is still the same.

#5149286 Any way to make this simple tennis game more OO?

Posted by Aldacron on 24 April 2014 - 09:45 PM

Thanks for all your input guys, this is really helpful. I'll take it a step at a time and try and refine it more and more into fitting the OOP paradigm. In general, for game development, how OO should you go? I'm a little new at all of this, so any more info (or if you need me to provide more) would be appreciated.


Thanks once again!


IMO, that's the wrong way to think about software architecture. It doesn't matter "how OO" a codebase is. What matters is how easy it is to maintain. Object orientation is just one tool of several that you can use to get the job done. The danger of focusing so much on OO is that you wind up with a rigid, inflexible monster that makes it impossible to make changes without negative consequences (like code breakage, or increased complexity of implementation).

Somewhere there is a balance between procedural spaghetti code and the inflexible wall of rigid OOP. Finding that balance is a matter of experience. The more you read about these techniques and the more code you write, the more you'll get a feel for flexible design. Try writing in a language that doesn't have OO built in (like C) so you can see the other side. You'll find yourself implementing objects in terms of structs and free functions, but you won't have all the extra help like private members, inheritance and such. I think that can help tremendously in understanding where the OO paradigm is useful and where it doesn't matter (or gets in the way).

#5143192 Opengl standard libraries

Posted by Aldacron on 29 March 2014 - 10:14 PM

Is this true? Because standard header files in C, C++, or Java give prototypes and define constants for functions that are going to be used and have nothing to do with the linking process.

Kaptein is describing the entire compile/link process. The OpenGL headers are C headers. You are correct that they declare prototypes and constants (you're wrong about Java, though, as there are no headers or prototypes there, but I won't expound on that) and are not directly involved in the link step. But, when you call a prototyped function, the linker expects the implementation of that function to be in an object file or library somewhere. If it can't find the implementation, you will get linker errors. So the header files do have an impact on the link step in that regard.

For modern OpenGL, the reference headers are available for download at opengl.org (you may have to scroll down that page just a bit to see them). There are four available there. The primary OpenGL functionality is all in glcorearb.h and, for many purposes, is all you need. There are three other headers there that offer additional extensions and deprecated functions for those who need them: glext.h, wglext.h, and glxext.h (the latter two are platform-specific). A brief description of each header is given on that page.

#5143191 Opengl standard libraries

Posted by Aldacron on 29 March 2014 - 10:00 PM

There is only one OpenGL library. If you are using C or C++ (or languages that can link directly with C libraries), then you will want to link with libGL when using gcc-based toolchains and, on Windows, OpenGL32.lib when using MSVC or compatible compilers.

#5141609 Will a 2D Game Engine be complex to make?

Posted by Aldacron on 23 March 2014 - 11:15 PM

The short answer: the complexity of a 2D engine depends largely on how you choose to implement the renderer and what sort of features you decide to support. You can look at it in terms of technical complexity and architectural complexity.

Go back to the 90s and the renderers were called 'blitters' and often involved handcrafted assembly and algorithmic tricks that caused faster code to be generated (all of which are largely irrelevant now with the modern optimizing compilers we can work with). Today, you can choose between a variety of graphics APIs, both hardware accelerated and pure software, to use as the backend of a sprite renderer. Using deprecated, immediate mode OpenGL or older versions of Direct3D (or even DirectDraw) might be less complex than other approaches, whereas using a shader-based approach can introduce a range of complexity. This is the sort of thing that significantly contributes to the technical complexity. 

Then there are the features. Each feature you support will introduce a certain amount of technical complexity in the implementation details, but also architectural complexity in how the features interact with each other. The more features you add, the easier it is to find yourself with a dysfunctional API unless you plan ahead.

Consider a minimal functioning 2D engine: basic sprites (with no transparency or animation) and basic audio (sound effects). You can make a number of games with that alone, and it's about as low-complexity as an engine can get. Start adding in more features like transparent sprites, animation, basic collision, streaming audio, and networking, and you greatly increase the range of games you can make, but you're also driving up complexity both technically and architecturally. Add more features like particles, lighting, rigid body physics, support for thousands of players, and other advanced concepts, and you continue to increase the complexity.

I think making a 2D engine is a great learning exercise, especially if you take this approach of gradually increasing the complexity. 15 years ago it was much like a rite of passage for aspiring game developers, but there wasn't a whole lot of room for features -- most 2D engines back then were all fairly similar in terms of the feature set. There's a much broader range of possibilities these days, so start out with something like the minimal engine I mentioned above, perhaps using shader-based OpenGL or D3D for the renderer backend. Use that to make some simple games, like Pong, Breakout, and Tetris. Then look at adding additional features and consider how you need to expand or modify your architecture to support them. Then make more games using the new feature(s). Then go back and do it again (another new feature and a couple of new games using it). This will give you some valuable experience. Then later on, if it's something you're interested in, you can take that experience and apply it to developing a simple 3D engine, or to using an existing, fully-featured engine to make a more complex sort of game.

#5135995 Recursion in C Programming: Confusion Begins

Posted by Aldacron on 02 March 2014 - 10:11 PM

The first time you call gcd, x=54 and y=24. The y==0 test fails because y is 24, so gcd is called again with x=24 and y=(54%24)=6. The second call to gcd knows nothing about the x and y in the first call, so all it sees are 24 and 6. Here, y still is not equal to 0, so gcd is called again with x=6 and y=(24%6)=0. On the third call, y==0 is true, so the function returns 6. Control returns back to the second call, which directly returns the result of the third call (6). Control returns to the first call, which directly returns the result of the second call. Therefore, printf prints "The gcd of 54 and 24 is 6."

EDIT: To the OP: I hope I didn't add too much to your confusion! I saw the line gcd(y, x%y) as gcd(x, x%y). Had I considered the meaning of gcd, I might have caught the error. Though the numbers were wrong, the rest of the post holds true.

No matter how many function arguments you have, recursive calls continue until the base condition is met (i.e. when you tell it to). If you did not include the test for y==0 in the function, you would have a case of infinite recursion. Each call has its own copies of the parameters and cannot see the values in the previous calls. When the base condition is met, the function returns all the way back up the call stack. Mechanically, it's the same as a normal function call process. a() calls b() calls c() calls d(), and when d finishes, control returns back up the callstack to a(). The only difference with recursive functions is that it's one function calling itself.
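For reference, the call chain described above can be reproduced with a direct translation of the thread's gcd function to Java (the original is C, but the recursion works identically):

```java
class Gcd {
    static int gcd(int x, int y) {
        if (y == 0) {
            return x;             // base condition: stop recursing
        }
        return gcd(y, x % y);     // each call gets its own fresh copies of x and y
    }
}
```

Calling Gcd.gcd(54, 24) walks through gcd(24, 6), then gcd(6, 0), which hits the base condition and returns 6 back up the call stack.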