


#5204891 AI Interface

Posted by haegarr on 17 January 2015 - 03:39 AM

AIs simulate human players, but I wonder how I should implement the AI in the code.
- Should I make the AI use the player UI to simulate player behaviour, or should the AI use direct commands but act as if it were limited to the same UI restrictions as the player?

The unit (regardless of whether it is a PC or an NPC) should be controlled through an API on top of the unit's state. That interface allows actions to be triggered and variables such as the desired movement speed to be influenced. In total, this defines what the unit is able to do and how it is controlled. The player gets a player controller that translates (G)UI input into invocations of the unit control. The AI (however it is implemented) also invokes the unit control interface in the end. This way both the player and an AI are able to request actions to be performed by the respective unit.
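As a minimal sketch of this idea (all names here — UnitControl, PlayerController, WanderAI — are invented for illustration, not from any particular engine), both the player controller and an AI drive the same unit control API:

```cpp
#include <cassert>
#include <string>

// One control API on top of the unit's state; nothing above this layer
// touches locomotion, animation, or physics directly.
class UnitControl {
public:
    void requestMove(float dx, float dy) { vx = dx; vy = dy; }
    void requestAction(const std::string& action) { lastAction = action; }

    float vx = 0.0f, vy = 0.0f;   // desired movement (simplified)
    std::string lastAction;
};

// The player controller translates (G)UI input into unit control calls...
class PlayerController {
public:
    explicit PlayerController(UnitControl& u) : unit(u) {}
    void onKeyPressed(char key) {
        if (key == 'w') unit.requestMove(0.0f, 1.0f);
        if (key == ' ') unit.requestAction("jump");
    }
private:
    UnitControl& unit;
};

// ...and an AI invokes the very same interface in the end.
class WanderAI {
public:
    explicit WanderAI(UnitControl& u) : unit(u) {}
    void update() { unit.requestMove(1.0f, 0.0f); }   // trivial decision
private:
    UnitControl& unit;
};
```

Swapping the controller (player vs. AI, or one AI method for another) leaves the unit and everything below it untouched.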


One advantage of the above approach is the clear distinction between layers of functionality. Regardless of the unit, any controller that supports all actions provided by the unit can be used to control it. Further, the underlying systems like locomotion, animation, and physics are independent of how exactly the unit is controlled. Beneficial side effects: Want to take over control of another character, e.g. to check its animation clips for correctness during development? No problem (if you have a suitable controller, of course). Want to make the UI for the PC configurable? The unit's API tells you what is possible. Want to try out another AI method? You just need to go down to the level of actions (which belong to AI anyway) but no further.


Notice that with the above the AI does not suffer from "the same UI restrictions as the player". Anything else would make no sense, IMHO. The (G)UI is, as the name says, the interface for the human to control the machine. It is one of your tasks to implement the (G)UI and controller in the best foreseeable way to feed the unit's API. The unit's abilities, on the other hand, are provided in a defined and consistent way and are the same for the player as for the AI (within the margins of the given game mechanics). The restrictions inherent there apply equally to players and AI.

#5204452 User & AI character movement Plumbing

Posted by haegarr on 15 January 2015 - 05:28 AM

I would like the AI to decide what it wants to do, then send events containing the forces to achieve its goals.
Is this not a good idea?

If you actually mean physical force, then it is not a good idea. Notice the overall complexity of a game. The human approach, especially but not only in software engineering, is to modularize a problem into smaller ones until the parts become (more or less) easy to manage. Further, coupling the parts only when and where meaningful keeps the whole construct maintainable.


One of the higher levels in this modularization process is often named "sub-systems". Input processing, graphics rendering, sound rendering, physics simulation, AI, networking, ... are such sub-systems. Each one has its task, and you should not mix them up. It is okay if some sub-systems use the output of other sub-systems to fulfill their own task (this is the aforementioned coupling), but each sub-system has its own responsibilities.


Creating physical force is not the responsibility of AI. Instead, AI uses sensors to investigate the environment, checks the needs of the driven agent, makes decisions and finds goals (perhaps considering knowledge, emotions, and culture), makes plans to reach the goals, and steers the agent accordingly by executing the corresponding actions. This is already a sufficiently complex problem by itself. (Not every AI considers all of the enumerated topics, but I wanted to show the margins within which AI works.)


Actions output by the AI are still somewhat high level. Sub-systems at lower levels deal with them. E.g. actions belonging to movement of the agent may be processed by the locomotion sub-system, which in turn may drive the animation sub-system, the physics sub-system, or both. In your case of dynamic motion, the locomotion sub-system may output a physical force as input to the physics simulation.
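A tiny sketch of that layering (the types and the proportional gain are assumptions for illustration): the AI emits a high-level action, and the locomotion sub-system converts it into a force for the physics simulation.

```cpp
#include <cassert>

// Output of the AI layer: a high-level action like "go to X".
struct MoveToAction { float targetX; };

// Input to the physics layer: a physical force (1D for brevity).
struct Force { float fx; };

// Locomotion converts the action into a force, e.g. with a simple
// proportional controller toward the target (gain is a tuning constant).
Force locomotionStep(const MoveToAction& action, float currentX) {
    const float gain = 2.0f;                       // assumed tuning value
    return Force{ gain * (action.targetX - currentX) };
}
```

The AI never computes a force itself; swapping dynamic motion for kinematic control only changes what locomotion outputs.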



EDIT: BTW, I'm seconding Ashaman73's suggestion to switch over to kinematic control.

#5204445 User & AI character movement Plumbing

Posted by haegarr on 15 January 2015 - 04:51 AM

Does this make it difficult to program the AI? I haven't programmed AI before,

I'd make a distinction here in order not to clutter sub-systems: AI is about decision making under consideration of the known world state. Output of the AI may be "go to X". If you actually want to use dynamic motion, then force may be the output of the locomotion system, but not of the AI. Under this view, using force does not make the AI itself more difficult, because the AI is independent of it.

#5203437 OpenGL samplers, textures and texture units (design question)

Posted by haegarr on 11 January 2015 - 04:37 AM

Is that so? I always thought it's the texture object that stores sampling parameters.
I've made the DX9 implementation of my render lib behave that way.

It's the old way that sampling parameters are stored with the texture object. But it was recognized that this isn't clean, because it is totally legal and perhaps desired to change the sampling although the texture pixel data stays the same, or to change the pixel data while the sampling is kept. (One can say that texture objects violate the single responsibility principle.) The solution available nowadays is the sampler object, which stores sampling parameters only. However, sampling parameters have not (yet) been removed from texture objects. IIRC, if you bind a sampler object to a texture unit, then the sampling parameters of the texture object are ignored and those of the sampler object are used; if no sampler object is bound, the sampling parameters within the texture object are used.

#5200385 do most games do a lot of dynamic memory allocation?

Posted by haegarr on 28 December 2014 - 01:57 AM

There are several useful allocation schemes besides pre-allocation: pool allocation, linear allocation, etc. All of these allow you to dynamically allocate memory without suffering the costs of new/delete.
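As a minimal sketch of one such scheme, here is a linear (arena) allocator: one pre-allocated block, each allocation is a pointer bump, and everything is freed at once by resetting the offset. (Alignment handling is omitted for brevity; a production version would need it.)

```cpp
#include <cstddef>
#include <vector>

// Linear allocator: no per-allocation new/delete cost, no individual frees.
class LinearAllocator {
public:
    explicit LinearAllocator(std::size_t capacity) : buffer(capacity), offset(0) {}

    void* allocate(std::size_t size) {
        if (offset + size > buffer.size()) return nullptr;  // arena exhausted
        void* p = buffer.data() + offset;                   // bump the pointer
        offset += size;
        return p;
    }

    void reset() { offset = 0; }           // "free" everything at once

    std::size_t used() const { return offset; }

private:
    std::vector<unsigned char> buffer;     // the one-time pre-allocation
    std::size_t offset;
};
```

A typical use is per-frame scratch memory: allocate freely during the frame, then call `reset()` once at its end.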

#5199950 intersection of two lines

Posted by haegarr on 25 December 2014 - 04:47 AM

[...] but I can't understand how it works. Can someone please try and explain it?

The method is this: There are 2 line segments p0p1 and p2p3 given. They can be expressed by using a ray equation (i.e. a directed line) whose independent variable is restricted. In this case

     r01( s ) := p0 + s * ( p1 - p0 )   w/   0 <= s <= 1

     r23( t ) := p2 + t * ( p3 - p2 )   w/   0 <= t <= 1

(maybe in the OP s and t are exchanged; I'm too lazy to actually compute the result).
For the first computation steps, you ignore the restrictions and ask where both rays are equal:
     r01( s ) = r23( t )
If you solve this for s and t (in 2D it is a linear system with 2 unknowns and 2 equations), you finally have to check whether both s and t fall into the originally defined restrictions.
It works perfectly. [...]
No, it does not, at least not in general: it breaks if the 2 lines are parallel / anti-parallel! In that case you'll get a division by zero when computing the value of ip.
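The whole method, including the guard against the parallel / anti-parallel case, can be sketched like this (solving r01(s) = r23(t) with Cramer's rule, then checking the [0,1] restrictions):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// True if segments p0p1 and p2p3 intersect. The determinant is zero for
// parallel / anti-parallel segments; returning false there avoids the
// division by zero mentioned above.
bool segmentsIntersect(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3) {
    const float dx01 = p1.x - p0.x, dy01 = p1.y - p0.y;   // direction of r01
    const float dx23 = p3.x - p2.x, dy23 = p3.y - p2.y;   // direction of r23
    const float det = dx01 * dy23 - dy01 * dx23;
    if (std::fabs(det) < 1e-9f) return false;             // (anti-)parallel
    const float ex = p2.x - p0.x, ey = p2.y - p0.y;
    const float s = (ex * dy23 - ey * dx23) / det;        // along p0p1
    const float t = (ex * dy01 - ey * dx01) / det;        // along p2p3
    return 0.0f <= s && s <= 1.0f && 0.0f <= t && t <= 1.0f;
}
```

Dropping the two range checks at the end turns this into an intersection test for infinite lines instead of segments.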

#5198908 Animation questions (many questions :P)[SOLVED]

Posted by haegarr on 18 December 2014 - 02:59 AM

[...] I'll try to debugg each of those options, but it will probably take some time (there's a lot of things that could be going wrong for each of those, with a lot of options for some of them).

Yeah, that's the reason why I spoke of a very simple manually made (i.e. hardcoded data) model, if possible. If you have a known-good model, any exporter / importer problem is excluded. If that simple model renders well (without animation), I would test some different bone settings (still without animation) to ensure that the skinning works. If this is okay too, I'd add some simple animation. At the very end, I'd enable the importer and test with a more complex set-up.

#5198748 Animation questions (many questions :P)[SOLVED]

Posted by haegarr on 17 December 2014 - 06:07 AM

A thought about how to proceed: I would go with at most 4 bones per vertex for now. If that works, going for more is a question of refactoring. I would also consider building up a test scene manually, to have a chance to simplify the model to the bare needs. While having an overview is fine for knowing where to go, non-incremental development makes failure tracking difficult.


For now: Good luck! :)

#5198736 Animation questions (many questions :P)[SOLVED]

Posted by haegarr on 17 December 2014 - 03:24 AM

1: good catch on the timing. [...] Out of curiosity, is there a reason that ought to be throwing the assert? Should I just initialize the clock earlier in the program so there's a larger value passed in on the first iteration? This was one of the quirks of the tutorial I was struggling with.

Assertions are a runtime mechanism to confirm that the state of the program is as expected. With assertions inserted, you don't rely on things running well, but check for it. Not all misbehavior of the code can be seen on screen; for example, some mistakes may have a subtle effect that sums up over time. You can distinguish assertions for at least 3 purposes: to check requirements on function arguments at the very beginning of a function, to ensure that return values of functions meet their definition, and to check the general state. So, if an assertion fires, and the implementation of the assertion matches its intention, then something is wrong.


With respect to that particular assertion: You have to investigate why it is triggered. Put an if conditional with the inverted condition just above the assertion, put a dummy statement into the if body, and set a breakpoint on the dummy statement. Then start the debugger and look at what value Factor has. It may be that, due to numerical imprecision, the value is just a tiny bit outside the allowed interval, or else it is far outside. In the former case you suffer from a typical problem of floating point numbers, while in the latter case you have a mistake. How to continue depends on which it is.
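The two cases can be told apart mechanically; a sketch (the tolerance EPSILON is an assumed value you would tune):

```cpp
#include <algorithm>

// True if factor is outside [0, 1] by only a tiny margin, i.e. the kind
// of overshoot caused by floating point imprecision rather than a bug.
bool isTinyOvershoot(float factor) {
    const float EPSILON = 1e-4f;   // assumed tolerance
    return (factor < 0.0f && factor > -EPSILON) ||
           (factor > 1.0f && factor < 1.0f + EPSILON);
}

// For the imprecision case, clamping back into [0, 1] is a common fix.
float clampFactor(float factor) {
    return std::min(1.0f, std::max(0.0f, factor));
}
```

If the overshoot is not tiny, clamping would only hide a real mistake — that case needs debugging instead.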


I dislike the clock just reading the game running time (as, unless I'm misunderstanding things, it will just run the animation once at start-up), and want to change it to start a clock on a keypress or event, but I'll run into the same assert throw, I believe.
The game running time is a good source for timing animations precisely because it is independent of the frame rate. On every iteration of the main game loop, the game running time is fetched at one point and used as the update time during that loop iteration. This way all animations see the same moment in time / the same delta time. When an animation is started, the current update time is stored as the start time of that animation instance. The difference of current update time minus start time then gives the local animation time, i.e. one that has value 0 at the beginning of the particular animation playback. This is in principle the time to be used for looking up keyframes.
Things get more complicated when animation loops are used. In this case the wrap at the animation loop's end can be used to adapt the start time, giving a new one for each iteration. Notice that this adaptation is not done by simply storing the current game running time; you have to account for the offset coming from exceeding the end of the loop.
Things get even more complicated when animation blending comes into play, because animation blending often requires time acceleration. But let's leave that aside for a later inspection.
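The timing scheme above, including the loop wrap, boils down to very little code; a sketch:

```cpp
#include <cmath>

// Local animation time: 0 at the start of playback, derived from the one
// game running time fetched per loop iteration. For looping animations
// the time wraps at the duration, keeping the overshoot as offset into
// the next iteration (rather than restarting at the current game time).
float localAnimationTime(float gameTime, float startTime,
                         float duration, bool looping) {
    float t = gameTime - startTime;
    if (looping)
        t = std::fmod(t, duration);
    return t;   // use this for keyframe lookup
}
```

Because every animation instance only stores its start time, all instances updated with the same gameTime automatically stay in sync.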
4: I bind the VAO earlier in the program when generating the terrain. Is there any reason to use a second one or rebind anything, or should the first call be fine? Apologies for not including that in the shown code.
It is sufficient to have a single VAO active all the time if you deal with VBOs the way that existed before the OpenGL 3 era, or anyway if you have just a single VBO. In fact, VAOs are intended to simplify switching of VBOs by having one VAO per configuration, because only 1 OpenGL call is then necessary for switching. This should give some performance boost. Unfortunately, in the past some drivers performed badly on this, as sources on the internet have reported. I don't know how it is nowadays.


and lastly, changed the size in glVertexAttribPointer to 32, to account for the 8 bones (or should this be 64? I would think it's still 32, since I don't want to pass the entire size of the VertexBoneData, only the half. I really need to devote a few weeks to just trying to nail down some of these details to understand things a little better. To note: I've tried both values.) [...]
What you want to pass is the byte offset from the beginning of the structure to the occurrence of the array Weights. This is the byte size of IDs[NUM_BONES_PER_VERTEX], which calculates to NUM_BONES_PER_VERTEX * sizeof(uint). Your best choice is to replace the literal constant with the offsetof operator, or at least with the expression NUM_BONES_PER_VERTEX * sizeof(uint).
As said, you rely on uint being 4 bytes in size. Consider changing that, e.g. by using uint32_t; that at least expresses what you want.
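A sketch of that layout with offsetof (assuming the struct from the thread, tightly packed): the value you would pass as the offset argument to glVertexAttribPointer falls out of the struct definition instead of being a magic literal.

```cpp
#include <cstddef>   // offsetof
#include <cstdint>   // uint32_t

// Per-vertex bone data as discussed in the thread (names assumed).
// uint32_t pins the element size to 4 bytes.
constexpr int NUM_BONES_PER_VERTEX = 8;

struct VertexBoneData {
    uint32_t IDs[NUM_BONES_PER_VERTEX];       // 8 * 4 = 32 bytes
    float    Weights[NUM_BONES_PER_VERTEX];   // 8 * 4 = 32 bytes
};

// The Weights attribute starts right after the IDs block:
constexpr std::size_t weightsOffset = offsetof(VertexBoneData, Weights);
```

With this, changing NUM_BONES_PER_VERTEX or the ID type automatically keeps the offset (and sizeof(VertexBoneData) as the stride) correct.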
[...] Also, should that second value be 8 as well, or is the "size" of the int still 4?:
The value does not denote the size of the int but the count of elements in the vector passed in. That value is allowed to be 1, 2, 3, or 4, because the vertex stage inputs of the GPU are 4-element vectors (elements not supplied are automatically completed with standard values). So "8" is not an option.


I need to update the vertex shader to account for the increase in the number of bones. Simply adding more bones in the shader doesn't work since it's a vec4, so I'll need to adapt that...somehow. I have some work to do in the meantime (real work sadly) but when I'm done I'll try to update that to account for it. Hopefully that will solve the issues. If anyone has an idea how to go about that, I'll likely try to pass in an array of 8 or a 4x4 matrix and pass in 16 bones. Unless there's a nifty vec8 (glsl doesn't seem to think so ). Since it's reading...uniquely...from that map, where it pulls the integer then the float, I'm a little flustered with how to get the data. Simply trying to adapt the shader to read:
layout (location = 5) in int BoneIDs[8];
layout (location = 6) in float Weights[8]; 
results in the error: "warning: two vertex attribute variables (named Weights[0] and BoneIDs[1]) were assigned to the same generic vertex attribute." I'm hoping I don't have to split that map into two separate data sets, though part of my brain would like that very much.

As said above, a vertex input is 4 elements wide. A "location" defines the index of the vertex stage input to use, so each "location" stands for a 4-element vector. When you write

    layout (location = 5) in int BoneIDs[8];

you ask the GPU to reserve 8 inputs beginning at location 5. This means locations 5, 6, 7, ..., 12, each one used to transfer one int value in its first element (the remaining 3 elements being filled up with 0, 0, and 1). Then you specify the Weights in a similar way, asking for locations 6, 7, 8, ..., 13. The two ranges overlap, so this cannot work and is hence denied by the shader compiler.


What you actually want to use is something like

    layout (location = 5) in ivec4 BoneIDs[2];
    layout (location = 7) in vec4 Weights[2];

This means "allocate the vertex stage inputs at locations 5 and 6, each one a 4-element int vector (giving you 8 integers in total), as well as the vertex stage inputs at locations 7 and 8, each one a 4-element float vector (giving you 8 floats in total)". This has an impact on the glVertexAttribPointer calls, of course.

#5198532 Animation questions (many questions :P)[SOLVED]

Posted by haegarr on 16 December 2014 - 08:05 AM

I worry a little that I'm setting up layout 5 and 6 (the bone information) incorrectly. I've never set the buffer data using a map, and it's possible I'm going about this incorrectly.

You are probably right with this assumption ;(


1.) Your struct VertexBoneData uses uint as the type for IDs, and you rely on uint being 4 bytes in size.


2.) The constant NUM_BONES_PER_VERTEX is set to 8. If tightly packed and sizeof(uint) is actually 4, then the block of IDs occupies 8*4=32 bytes, and likewise the block of Weights occupies 32 bytes. However, the offset specified for Weights within the belonging glVertexAttribPointer call is just 16, making the second half of IDs be interpreted as weights. This hints that NUM_BONES_PER_VERTEX should be 4 instead. The vertex shader script is also written as if NUM_BONES_PER_VERTEX were 4.


3.) I don't see a glGenBuffers(1, &boneBuffer) anywhere.


4.) I'm also missing a VAO. Although some implementations work without it, the OpenGL 3.x specification has made it mandatory. You should use one to avoid spurious mistakes.

#5198521 Animation questions (many questions :P)[SOLVED]

Posted by haegarr on 16 December 2014 - 07:15 AM

... If Haegar answers this, I think I owe him my first-born child at this point

Although this is an attractive offer for sure, I have to confess that the amount of code in the OP is a bit ... discouraging. Rumpelstiltskin's deal was easier ;)


For now, being obvious due to its bolded typeface, this ...

If i print out the transformations, I do get occasional outputs such as "-9.53674e-07" 

... does not hint at a problem. It is just scientific notation for the number -9.53674 * 10^-7 = -9.53674 / 10000000 ≈ -0.00000095, which is simply close to zero.


    float Factor = (AnimationTime- (float)pNodeAnim->mRotationKeys[RotationIndex].mTime) / DeltaTime;

One thing that is at least confusing is that inside the position interpolation code as well as the scaling interpolation code you use the mTime of the rotation keyframe. Maybe the keyframes of position, rotation, and scaling are all at the same points in time, but even then you should use mPositionKeys[PositionIndex].mTime and mScalingKeys[ScalingIndex].mTime, respectively, for the sake of clarity. If, however, the keyframes are not equally timed, your routines are definitely wrong.

#5198394 I don't know how can I solve that problem in efficient way .

Posted by haegarr on 15 December 2014 - 02:00 PM

Interpret the path as the ray r(k) := A + k * (G-A), 0 <= k <= 1, compute all possible collisions, and choose the one with the lowest k.
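A sketch of the selection step (how each candidate k is computed depends on the obstacle shapes, so that part is left abstract here):

```cpp
#include <vector>

// Given the ray parameters k of all possible collisions along the path
// r(k) = A + k*(G - A), return the smallest k within [0, 1] — i.e. the
// first obstacle actually hit on the way from A to G — or -1 if the
// path is free.
float firstCollision(const std::vector<float>& candidateKs) {
    float best = -1.0f;
    for (float k : candidateKs)
        if (0.0f <= k && k <= 1.0f && (best < 0.0f || k < best))
            best = k;
    return best;
}
```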

#5197969 How should i call my classes instead of "xyManager"?

Posted by haegarr on 13 December 2014 - 07:05 AM

ImageCache sounds fine, thanks.

Well ... notice the process: A client requests a specific resource from the resource library (to avoid the word "manager" here ;)). The library asks the cache for the resource. The cache does not store it, so it returns nil. The library decides which storage allocator to use and triggers the loader to load the resource utilizing that storage allocator. The loader investigates its directory structure (keywords: archive file and file overriding) and detects the file. It looks for a suitable file format, requests an importer for it, and calls it to read the resource. The resource is then returned to the library. The library looks up which cache policy is installed for that resource and puts the resource into the corresponding cache. It then returns to the calling client.
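A heavily simplified sketch of that request flow (loader, importer, allocator, and cache policy are collapsed into stubs; all names are illustrative): the library asks the cache first, falls back to the loader on a miss, and installs the result in the cache.

```cpp
#include <string>
#include <unordered_map>

using Resource = std::string;   // stand-in for a real resource type

class ResourceLibrary {
public:
    int loadCount = 0;          // how often the loader actually ran

    const Resource& request(const std::string& name) {
        auto it = cache.find(name);
        if (it == cache.end()) {                       // cache miss:
            Resource r = load(name);                   // delegate to loader
            it = cache.emplace(name, std::move(r)).first;  // install in cache
        }
        return it->second;                             // serve to the client
    }

private:
    Resource load(const std::string& name) {  // loader/importer stub
        ++loadCount;
        return "data:" + name;
    }

    std::unordered_map<std::string, Resource> cache;
};
```

The client only ever sees `request()`; whether the resource came from the cache or from mass storage stays hidden behind the facade.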


From such a description, where the single tasks are sectioned out, one can see how many instances are actually involved in such a "simple" thing as resource management. Many of them can be agnostic of the actual resource type; some cannot (or should not) be. Almost all of them need not know how the resource is encoded on mass storage. Many aspects can be configured (like the storage allocation policy or the cache policy), are somewhat runtime dependent (like the file format), or are not relevant to the client (like whether the resource is cached or not). Implementing all this in a single class is wrong, as is exposing it into the wild (*), regardless of how the class is named.


You have already mentioned that your process is not that comprehensive, and that you rely on 3rd party libraries. However, the above should demonstrate what a comprehensive solution may look like in detail, especially how many classes are involved depending on the inherent tasks.


(*) This is why I would not name the facade class "ImageCache".

#5197952 How should i call my classes instead of "xyManager"?

Posted by haegarr on 13 December 2014 - 05:39 AM

The OP is somewhat vague, so I write down some possible pitfalls I see:


Your ImageManager may be problematic since it violates the single responsibility principle if implemented as a single class. The single tasks possibly involved here are (a) allocating local storage, (b) reading a resource from mass storage, (c) uploading the resource, (d) implementing a cache strategy, (e) granting access. Okay, all this together means managing the resource, but the tasks are too different to be implemented in a single class. Instead, if you understand the manager as a facade granting a small API in front of the wrapped functionality, it is IMHO okay to name it "manager" if you wish.


Your BuildingManager and UnitManager are even more problematic. They implement some kind of scene setting but also implement collision detection and update services!? These tasks are not related and IMHO cannot even be subsumed under a single "management".


It reads like your EventManager could be named EventProcessor. However, if it executes commands that are generated by events, it is perhaps a CommandProcessor fed by an EventDispatcher? Because "handle Commands and other events" doesn't seem to me to fit in a single class.



EDIT: wintertime was a bit faster than me, and I second that post :)

#5197940 How do I add items into inventory of different types?

Posted by haegarr on 13 December 2014 - 03:20 AM

Using inheritance just for the sake of distinguishing kinds of items does not really make sense. If this is the only intention, then a solution with an enum is just fine, especially since you're anchoring the existence of armor in the interface of Actor already.


However, one problem that arises is that something like "is armor" or "is weapon" is a property, not a type, and should be understood as "is usable as armor" and so on. For example, a chain mail is primarily an armor but also a wearable, while a wooden shield is an armor but not a wearable. Or, a sword is primarily a weapon, but a hammer may be a weapon as well as a tool. Well, your game may ignore the manifold usability of items, of course, as many games do.


Now, to come back to your question, OOP mechanisms like inheritance are well used if the functional differences are hidden behind a common interface. For example, if the Item class provides an Item::Usability enumeration as well as a re-implementable request interface similar to this

    class Item {
    public:
        enum class Usability : uint32_t {
            Armor, Weapon, Tool, Wearable
        };

        virtual ~Item() = default;

        virtual int getIntProperty( Usability forUsability ) const {
            return 0; // not usable in the requested way by default
        }
    };

a client can ask for any defined property by using a single generic function and without ever knowing of what concrete type the item is. The advantages of such a solution are

* adding a property does not affect eventually existing derived classes, because the interface does not change,

* the returned number represents an effect; it may be 0 (meaning "not usable in the requested way"), positive, or even negative,

* a single item can still respond to a couple of properties with a number unequal to 0.

The disadvantages are

* properties are typed, and for each type you need to have a function,

* a virtual function is used (well, this is not a disadvantage in principle, but it drains performance if called very often per frame).


This is, however, only one solution. Others exist as well.
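As a usage sketch of the interface described above (ChainMail and its property values are invented for illustration): a single item responds to several usabilities at once, which a plain type hierarchy could not express.

```cpp
#include <cstdint>

class Item {
public:
    enum class Usability : uint32_t { Armor, Weapon, Tool, Wearable };
    virtual ~Item() = default;
    virtual int getIntProperty( Usability forUsability ) const { return 0; }
};

// A chain mail is primarily an armor but also a wearable.
class ChainMail : public Item {
public:
    int getIntProperty( Usability forUsability ) const override {
        switch (forUsability) {
            case Usability::Armor:    return 5;  // effective as armor...
            case Usability::Wearable: return 1;  // ...and also wearable
            default:                  return 0;  // not a weapon or tool
        }
    }
};
```

The client code queries `getIntProperty` generically and never needs to know it is dealing with a ChainMail.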