


#5205954 Managing State

Posted by haegarr on 22 January 2015 - 03:14 AM

Although not as explicit as possible, the following approach may make a transition from the existing implementation easier:


Based on what Hodgman has written above, but for cases where the entire state vector is not put into each DrawItem: a reasonable default state set can be defined (perhaps dependent on the active graphics pipeline stage). Any explicitly set state overrides its default equivalent, of course. From the low-level renderer's point of view, each draw call then has a complete set of state.


In practice, at the beginning of the draw call processing, the default set of states is copied into a local set of states. The state carried by the DrawItem is then written "piecewise" onto the local set. Then the local set is compared with the state set that represents the current GPU state; any difference causes a call to the render context as well as an update of the latter set accordingly.
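As an illustration, here is a minimal sketch of that processing. The state slots, the StateSet type, and the submit() function are all made up for this example, and the actual render context call is reduced to writing into a plain array:

```cpp
#include <array>
#include <optional>

// Hypothetical, tiny state vector: just three state slots for illustration.
enum StateSlot { BlendMode, DepthTest, CullMode, NumSlots };

using StateSet = std::array<int, NumSlots>;

// A DrawItem carries only the states it explicitly sets.
struct DrawItem {
    std::optional<int> state[NumSlots];
};

// Returns the number of state changes submitted to the (simulated) render context.
int submit(const DrawItem& item, const StateSet& defaults, StateSet& gpuState) {
    // 1) start from the default set ...
    StateSet local = defaults;
    // 2) ... overwrite it piecewise with the DrawItem's explicit states ...
    for (int i = 0; i < NumSlots; ++i)
        if (item.state[i]) local[i] = *item.state[i];
    // 3) ... and apply only the differences to the current GPU state.
    int changes = 0;
    for (int i = 0; i < NumSlots; ++i) {
        if (gpuState[i] != local[i]) {
            gpuState[i] = local[i];  // stands in for the actual render context call
            ++changes;
        }
    }
    return changes;
}
```

Submitting the same DrawItem twice in a row then causes no redundant state changes, which is the point of the comparison step.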

#5205948 Getting World Matrix (Mesh Space)

Posted by haegarr on 22 January 2015 - 02:46 AM

How do I get the world matrix of the mesh based on the mesh space?

The vertex positions are given w.r.t. a reference space. This space is conventionally called the model space. Recomputing the vertex positions w.r.t. another space requires the existence of a model-to-other-space transform. This transform must be explicit; it cannot be given implicitly by the vertices themselves.


The only thing you can do is to estimate another origin by computing the center of all vertex positions (or something similar). This works reasonably well for symmetric geometry. Otherwise you need to handcraft the transform and add / apply it during import.



EDIT: L. Spiro's practical tips ;D

Slam your head into a wall until you forget what OBJ files are 

#5205753 Resource Manager Design

Posted by haegarr on 21 January 2015 - 05:42 AM

I'll definitely start applying this in my library. Seems like it'll clean up quite a bit.

Fine :) But remember that it is only one way among many possibilities. Use whatever is meaningful for you.


For example, in my own resource management, the ResourceLoader is not the part that deals with the file data. Also, for other de-coupling reasons, I have ResourceType and IOFormat, both being base classes. IOFormat represents the format itself, e.g. it answers questions like "how competent are you to interpret this given block of data?". More important to this thread, however, is that it provides 2 abstract inner classes, IOFormat::Importer and IOFormat::Exporter, as well as 2 abstract factory methods, one for the importer and one for the exporter. These two classes are the ones that actually deal with file-formatted data. The ResourceLoader itself is more or less just another thing in-between, since it uses a concrete ResourceType as well as a concrete IOFormat and IOFormat::Importer to fulfill its task.


To come to the point, somebody once said "there is no software problem that cannot be solved by another level of indirection, with the exception of having too many indirections". And so you have to decide, time and again, when your "too many levels" has been reached ...

#5205752 Quick texture array question

Posted by haegarr on 21 January 2015 - 05:11 AM

In other words:


One has to make a distinction between the texture object name (the number generated by glGenTextures, and used as the 2nd parameter of glBindTexture, for example) and the texture unit index (the trailing number of the symbolic constant when calling glActiveTexture, or the 2nd parameter of glUniform1i in your case). The former names the texture, the latter the texture unit.


A texture unit is a part of the GPU hardware. From the outside, you bind a texture object to the texture unit, thereby saying "when this texture unit is accessed, the returned data should come from this texture object"; that is done in 2 steps, first by glActiveTexture to specify the texture unit, then by glBindTexture to specify the texture object. Then, within your shader script, you have a sampler. The sampler works on a texture unit. So you need to call glUniform to specify which texture unit should be used by that specific sampler.


In the end you have an indirect access of the sampler onto the texture data, with the texture unit in-between:

          texture object --bound_to--> texture unit <--read_from-- sampler
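The indirection can be modeled in a few lines of plain C++; this is a toy simulation of the binding state, not actual OpenGL code, and the GLModel type and its member names are made up for illustration:

```cpp
#include <map>

// A toy model of the GL binding state. Each member function simulates the
// effect of the GL call named in its comment; no real GL is involved.
struct GLModel {
    int activeUnit = 0;                  // set by glActiveTexture(GL_TEXTUREi)
    std::map<int, int> unitToTexture;    // texture unit  -> texture object name
    std::map<int, int> samplerToUnit;    // sampler       -> texture unit

    void activeTexture(int unit)          { activeUnit = unit; }                    // glActiveTexture
    void bindTexture(int textureName)     { unitToTexture[activeUnit] = textureName; } // glBindTexture
    void uniform1i(int sampler, int unit) { samplerToUnit[sampler] = unit; }        // glUniform1i

    // What the sampler actually reads from: sampler -> unit -> texture object.
    int textureSeenBySampler(int sampler) {
        return unitToTexture[samplerToUnit[sampler]];
    }
};
```

Binding texture object 42 to unit 3 and pointing sampler 0 at unit 3 makes the sampler see texture 42, mirroring the diagram above.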

#5205745 Resource Manager Design

Posted by haegarr on 21 January 2015 - 04:21 AM

First off, what we discuss here is the OOP way of doing things, and that with respect to the mid to long run. It is just an approach to defeat problems that may occur in the future when software is to be extended or reworked in some way. There is no constraint to follow it absolutely; software also worked before OOP was invented. In the end, software development always suffers from the demand for high quality at low effort. But without being sensitive to the danger of what may happen when software grows, bad things will happen.



I should perhaps highlight that XML and binary, as used by me, are meant not as generic but as specific file formats, i.e. "my game's specialized XML file format" and "my game's specialized binary file format".



1. Are you saying the ResourceManager should call a ResourceLoader? If so, is ResourceLoader split into "XMLResourceLoader" and "BinaryResourceLoader"?
Again; if so, why not just have a "loadXML()" and "loadBinary()" functions inside ResourceLoader? Is that what the bad thing is?

Yes, I mean that if the ResourceManager object has determined that a specific resource requested by a client is not available from the cache and hence needs to be loaded, it delegates the loading to a ResourceLoader object.


How the ResourceLoader class works is another question. It could be an abstract base class, with XMLResourceLoader and BinaryResourceLoader being classes derived from it. So ResourceLoader provides the common API used by ResourceManager, and XMLResourceLoader and BinaryResourceLoader each implement the interface for loading from an XML or a binary source, respectively.


Is the ResourceManager free to decide which subtype of ResourceLoader to use? No, because the format of the data source defines which loader type is to be used; one cannot use e.g. BinaryResourceLoader to read from an XML source. If ResourceLoader publicly provided loadXML() and loadBinary(), the choice of which to use would be externalized, although one and only one of them would work anyway. But if ResourceLoader (i.e. the base class) provides a single abstract load() function, the ResourceManager has no need to know whether the source is XML or binary (or whatever else); it just invokes load() on the concrete ResourceLoader that is available to load the resource.
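The class layout described above can be sketched like this; the Resource struct and the stubbed parsing are placeholders for whatever your engine actually uses:

```cpp
#include <memory>
#include <string>

struct Resource { std::string name; };  // placeholder for a real resource type

// Common API used by the ResourceManager; it never needs to know the format.
class ResourceLoader {
public:
    virtual ~ResourceLoader() = default;
    virtual std::unique_ptr<Resource> load(const std::string& source) = 0;
};

class XMLResourceLoader : public ResourceLoader {
public:
    std::unique_ptr<Resource> load(const std::string& source) override {
        // ... real XML parsing of the game's format would go here
        return std::make_unique<Resource>(Resource{"xml:" + source});
    }
};

class BinaryResourceLoader : public ResourceLoader {
public:
    std::unique_ptr<Resource> load(const std::string& source) override {
        // ... real binary parsing of the game's format would go here
        return std::make_unique<Resource>(Resource{"bin:" + source});
    }
};
```

The manager only ever holds a ResourceLoader pointer and calls load(); which concrete class sits behind it is decided by whoever knows the source format.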


2. Should ResourceManager have all the ResourceContainers (which then contain Resource objects)?


The manager should have all containers that are needed to manage the resource types the manager is responsible for. I used this "weak" sentence because you may find it useful to manage resources for each game level separately, or (which means an inclusive or) you may find it useful to manage resources separated by type.


Should ResourceManager even call ResourceLoader?

In this discussion we use the ResourceManager like a director. The manager knows of the loader, cache, ... and mediates between them; that is its job. When going this route, there is no need to couple e.g. the ResourceLoader to the ResourceContainer just to enable the loader to store the resource directly. The ResourceManager, on the other hand, is coupled to the ResourceContainer anyway, because it needs to check whether a resource is already loaded before wasting time by calling a loader.


Notice that the responsibility of ResourceLoader is to load a requested resource. Whether it is meaningful to load it and what happens to the resource after loading is out of the scope of the loader. Restricting the responsibility like so makes it easier to re-use the classes in other contexts. That is one of the points of maintainability.
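The director role can be sketched as follows; the loader is reduced to a plain callable here so the sketch stands on its own, and all names are illustrative:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

struct Resource { std::string data; };  // placeholder resource type

// The manager as director: it owns the cache (container) and mediates to a
// loader; the loader itself neither knows nor touches the container.
class ResourceManager {
public:
    using Loader = std::function<std::shared_ptr<Resource>(const std::string&)>;

    explicit ResourceManager(Loader loader) : load_(std::move(loader)) {}

    std::shared_ptr<Resource> request(const std::string& name) {
        auto it = cache_.find(name);
        if (it != cache_.end()) return it->second;  // cached: no loader call
        auto res = load_(name);                     // delegate the actual loading
        cache_[name] = res;
        return res;
    }

    int cacheSize() const { return (int)cache_.size(); }

private:
    std::map<std::string, std::shared_ptr<Resource>> cache_;
    Loader load_;
};
```

Requesting the same resource twice hits the loader exactly once, which is precisely the check-the-cache-first behaviour described above.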


3. Do you mean that the Sprite class should not have functions to write/read buffers? If so, do you mean there should be a SpriteSerializer class?

Serializing an object means to transform its data into an external representation. In other words, a specific format is used. But the format can change (e.g. XML vs. binary, or version evolution). To abstract this, we have ResourceLoader (and ResourceWriter, if saving is supported). So SpriteSerializer would just be another concretization of ResourceWriter.


Notice please that a sprite (as an example of any resource) is just a typed data container including some metadata. The animation sub-system alters its world transform, the graphics renderer reads it for the purpose of rendering, the resource loader is able to write data to it, and so on. The sprite itself, as a resource, does almost nothing by itself. If you want to support ResourceLoader / ResourceWriter directly from the resource, then at most something like the Memento pattern should be used.




Some more things to think about:


You have probably noticed that the loader classes so far are designed to abstract the format of the data source, ignoring the type of resource. (BTW: Here the XMLResourceLoader is not meant to understand XML files in general, but XML files formatted to represent your game resources, of course.) Real-world file formats often represent a single type of resource (see PNG, JPG, WAV, OBJ, ...). In such cases the concrete ResourceLoader (e.g. PngResourceLoader) is defined a priori to result in a specific resource type. But in games we often deal with file formats that provide collections of resources (a.k.a. package files or archive files). So, if you want to support single resource loading, you need to support the internal directory that is inherent to such collections. This can be solved by implementing the concrete ResourceLoader to handle the collection stuff and to delegate to other concrete ResourceLoader instances as soon as the type of the resource is determined.


Further, we have not examined the dimension of data sources. So far we load resources from files. That is fine in general because files are already abstractions, since they may be mass-storage based or socket based. If ResourceLoader should be agnostic of what kind of file it is dealing with, the file has to be opened external to the ResourceLoader class. IMHO even better, any ResourceLoader should deal with an abstraction DataSource anyway.

#5205569 Resource Manager Design

Posted by haegarr on 20 January 2015 - 10:40 AM

I know this is controversial, but to be honest, that just makes it a lot more complicated than it needs to be.
It is probably because I'm still a novice, but I just feel that adding tons of small classes does nothing more than make everything messy.



Sorry for being stubborn, but I need some serious convincing to see the faults of this.

The Sprite class, as an example, defines how a sprite is represented when being loaded into working memory, i.e. it manages sprite-typical data for runtime purposes. So far so good. Now you add the possibility to load the data from an XML fragment, so that the Sprite class can construct its own instances. Okay, but we need to generate Sprite instances at runtime as well, so give it another factory method for this, too. If I have that, I could generate sprites by code and save them for later reload, so having a save method would be nice. Well, I want to support reading from and writing to files, but reading from and writing to memory would make networking more convenient, so methods for that are fine as well. Hmm, now that levels get bigger, I want to support binary data files as well as XML files. Oh wait, now that my Sprite class has this fantastic new feature, but existing files do not, I need to support a second generation of routines and, since I want to support older engines, perhaps also saving routines. Now just putting in the render routines for OpenGL 3 for older machines, and those for OpenGL 4, would mostly complete it. Support for mouse picking, because of the editor I'm planning, is a must of course. A bit of collision detection, and ...


Although exaggerated, the story above is what happens in reality. To defeat this from the very beginning, the single responsibility principle was defined. Sure, at the moment you say "I have only 3 responsibilities in my manager class, that is still maintainable", and you're right. However, this changes with time, and it always changes in the wrong direction if you do not defeat it explicitly.


Notice that this does not mean that there cannot be something like a manager. However, such a manager would be a facade class, where clients find a concentrated API, but the work is done behind the facade by dedicated objects, merely coordinated by the manager.


In the end, SRP simplifies the ability to exchange parts (for example the implementation of the resource cache), to develop and test aspects separately (for example the versioning of resources on mass storage), to still understand a piece of software after half a year or when it is developed by another person, and to re-use it in another context as well. Its advantage is found in the mid to long run.

#5205219 Awareness AI

Posted by haegarr on 19 January 2015 - 02:50 AM

To make ApochPiQ's point of view even clearer, I'll exercise one possible route of thought (the one I went down while working out an answer before reading ApochPiQ's posts):


When reading the OP and interpreting the statement made there, I see "suggesting to the player that he can buy that item". What are the conditions for this? The player can buy an item if a) the item is for sale, and b) the item is offered, and c) the item's price can be paid. Now, that does not really have anything to do with variables like the number of crashes and the damage state and, err, how many gates have been passed (whatever that means in this context). Considering such variables means in fact not what can be bought but what is meaningful to buy.


Well, with meaningfulness you have another condition to fulfill: it must be an advantage to buy a specific item, be it as an addition or as a replacement (the latter comes into play to give damage another dimension). In reality it is even more complicated: is it more meaningful to buy items A and B, or else item C, because your budget allows just for the one or the other decision?


So you are in the realm of optimization: which combination of all offered items gives me the best result with respect to a goal (winning a race), under consideration of restrictions (e.g. the budget)? That is what you want to solve!


After working that out ... is an FSM good for this optimization? A big No! Is a decision tree good for this optimization? Also No! To be precise, both can be used, but you would have to model all possible states explicitly. That would be okay if only a few items existed and the variables were also very restricted in count and possible states. But in general it ranges between far too much work and impossible to do. However, since this is an optimization problem, it can be assumed that optimization strategies exist which are able to solve it...
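The item/budget problem described above maps onto the classic 0/1 knapsack, which is one such optimization strategy. A minimal sketch; the "benefit" values would have to come from some game-specific scoring of how much each item helps toward winning:

```cpp
#include <algorithm>
#include <vector>

// 0/1 knapsack: maximize the total benefit of bought items under a budget.
// price[i] and benefit[i] describe item i; each item can be bought at most once.
int bestBenefit(const std::vector<int>& price,
                const std::vector<int>& benefit, int budget) {
    // best[b] = maximum benefit achievable with budget b
    std::vector<int> best(budget + 1, 0);
    for (std::size_t i = 0; i < price.size(); ++i)
        for (int b = budget; b >= price[i]; --b)  // reverse: each item used once
            best[b] = std::max(best[b], best[b - price[i]] + benefit[i]);
    return best[budget];
}
```

With prices {3, 4, 5}, benefits {4, 5, 6}, and a budget of 7, the optimum is to buy the first two items (total benefit 9) rather than the single most beneficial one.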

#5204891 AI Interface

Posted by haegarr on 17 January 2015 - 03:39 AM

Now AIs are simulating human players, but I wonder how I should implement the AI in the code?
- Should I make the AI use the player UI to simulate player behaviour, or should the AI use direct commands but simulate as if it were limited to the same UI restrictions as the player?

The unit (regardless of PC or NPC) should be controlled by an API on top of the unit's state. The interface allows for triggering actions and influencing variables like the desired movement speed. In total, this defines what the unit is able to do and how it is controlled. The player gets a player controller that translates (G)UI input into invocations of the unit control. The AI (however it is implemented) also invokes the unit control interface in the end. This way both the player and an AI are able to request actions to be performed by the respective unit.


One advantage of the above approach is the clear distinction between layers of functionality. Regardless of the unit, if you have a controller that is able to support all actions provided by the unit, then it can be used to control it. Further, the underlying systems like locomotion, animation, and physics are independent of how exactly the unit is controlled. Beneficial side effects: want to take over control of another character, e.g. to check its animation clips for correctness during development? No problem (if you have a suitable controller, of course). Want to make the UI for the PC configurable? The unit's API tells you what is possible. Want to try out another AI method? You just need to go down to the level of actions (which belongs to AI anyway) but no further.
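The layering can be sketched like this; the class and method names are invented for the example, and both controllers drive the very same UnitControl interface:

```cpp
#include <string>
#include <vector>

// The unit exposes one control API; player controller and AI both talk to it,
// never to locomotion/animation/physics directly.
class UnitControl {
public:
    void requestAction(const std::string& action) { actions_.push_back(action); }
    void setDesiredSpeed(float speed)             { desiredSpeed_ = speed; }
    float desiredSpeed() const                    { return desiredSpeed_; }
    const std::vector<std::string>& actions() const { return actions_; }
private:
    std::vector<std::string> actions_;
    float desiredSpeed_ = 0.f;
};

// Translates (G)UI input into unit control invocations.
struct PlayerController {
    UnitControl* unit;
    void onKeyPressed(char key) {
        if (key == ' ') unit->requestAction("jump");
        if (key == 'w') unit->setDesiredSpeed(1.f);
    }
};

// An AI driving the very same interface with direct invocations.
struct AIController {
    UnitControl* unit;
    void think() { unit->requestAction("jump"); }
};
```

From the unit's point of view, a jump requested by a key press and a jump requested by the AI are indistinguishable, which is exactly the point.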


Notice that with the above, the AI does not suffer from "the same UI restrictions as the player". Otherwise it would make no sense, IMHO. The (G)UI is, as the name says, the interface for the human to control the machine. It is one of your tasks to implement a (G)UI and controller in the best foreseeable way to feed the unit's API. On the other hand, the unit's abilities are provided in a defined and consistent way and are the same for the player as well as for the AI (within the margins of the given game mechanics). The restrictions inherent there are the same for players as well as for AI.

#5204452 User & AI character movement Plumbing

Posted by haegarr on 15 January 2015 - 05:28 AM

I would like the AI to decide what it wants to do, then send events containing the forces to achieve its goals.
Is this not a good idea?

If you actually mean physical force, then it is not a good idea. Notice the overall complexity of a game. The human approach, especially but not only in software engineering, is to modularize a problem into smaller ones until the parts become (more or less) easy to manage. Further, coupling the parts only when and where meaningful preserves the maintainability of the whole construct.


One of the higher levels in this modularization process is often named "sub-systems". Input processing, graphics rendering, sound rendering, physics simulation, AI, networking, ... are such sub-systems. Each one has its task. You should not mix them up. It is okay if some sub-systems use the output of other sub-systems to fulfill their own task (this is the said coupling), but each sub-system has its own responsibilities.


Creating physical force is not the responsibility of the AI. Instead, the AI uses sensors to investigate the environment, checks the needs of the driven agent, makes decisions and finds goals (perhaps considering knowledge, emotions, and culture), makes plans to reach the goals, and steers the agent accordingly by executing the corresponding actions. This is already a sufficiently complex problem by itself. (Not every AI considers all enumerated topics, but I wanted to show the margins within which AI works.)


Actions output by the AI are still somewhat high level. Sub-systems at lower levels deal with them. E.g. actions belonging to the movement of the agent may be processed by the locomotion sub-system. This again may drive the animation or physics sub-system, or both. In your case of dynamic motion, the locomotion sub-system may output some physical force as input to the physics simulation.



EDIT: BTW, I'm seconding Ashaman73's suggestion to switch over to kinematic control.

#5204445 User & AI character movement Plumbing

Posted by haegarr on 15 January 2015 - 04:51 AM

Does this make it difficult to program the AI? I haven't programmed AI before,

I'd make a distinction here in order not to clutter sub-systems: AI is about decision making under consideration of the known world state. An output of the AI may be "go to X". If you actually want to use dynamic motion, then force may be the output of the locomotion system, but not of the AI. Under this view, using force does not make the AI itself more difficult, because the AI itself is independent of that.

#5203437 OpenGL samplers, textures and texture units (design question)

Posted by haegarr on 11 January 2015 - 04:37 AM

Is that so? I always thought it's the texture object that stores sampling parameters.
I've made the dx9 implementation of my render lib behave that way.

It's the old way that sampling parameters are stored with the texture object. But it was recognized that this isn't a clean way, because it is totally legal and perhaps wanted to change the sampling although the texture pixel data stays the same, or to change the pixel data while the sampling is kept. (One can say that texture objects violate the single responsibility principle.) The solution currently available is the sampler object, which stores sampling parameters only. However, sampling parameters are not (yet) removed from texture objects. IIRC, if you bind a sampler object to a texture unit, then the sampling parameters of the texture object are ignored and those of the sampler object are used; and if no sampler object is bound, the sampling parameters within the texture object are used.

#5200385 do most games do a lot of dynamic memory allocation?

Posted by haegarr on 28 December 2014 - 01:57 AM

There are several useful allocation schemes besides pre-allocation: pool allocation, linear allocation, etc. All of these allow you to dynamically allocate some memory without suffering the costs of new/delete.
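As an illustration, here is a minimal linear (a.k.a. arena or bump) allocator. It is a sketch, not production code: in particular it aligns the offset within the buffer rather than the absolute address, which is a simplification:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Linear allocator: each allocation is just a pointer bump, and everything is
// released at once by resetting -- no per-object new/delete.
class LinearAllocator {
public:
    explicit LinearAllocator(std::size_t capacity) : buffer_(capacity), offset_(0) {}

    void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        // round the offset up to the requested alignment (simplified scheme)
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > buffer_.size()) return nullptr;  // arena exhausted
        offset_ = aligned + size;
        return buffer_.data() + aligned;
    }

    void reset() { offset_ = 0; }  // frees everything at once
    std::size_t used() const { return offset_; }

private:
    std::vector<std::uint8_t> buffer_;
    std::size_t offset_;
};
```

This pattern fits per-frame scratch memory well: allocate freely during the frame, then reset() once at the end instead of tracking individual frees.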

#5199950 intersection of two lines

Posted by haegarr on 25 December 2014 - 04:47 AM

[...] but I can't understand how it works. Can someone please try and explain it?

The method is this: there are 2 line segments, p0p1 and p2p3, given. They can be expressed by using a ray equation (i.e. a directed line) when the independent variable of the ray is restricted. In this case

     r01( s ) := p0 + s * ( p1 - p0 )   w/   0 <= s <= 1

     r23( t ) := p2 + t * ( p3 - p2 )   w/   0 <= t <= 1

(maybe in the OP s and t are exchanged; I'm too lazy to actually compute the result).
For the first computation steps, you ignore the restriction and ask where both rays are equal
     r01( s ) = r23( t )
If you solve this for s and t (it is a linear system with 2 unknowns and 2 equations in 2D), you finally have to check whether both s and t fall within the originally defined restrictions.
It works perfectly. [...]
No, it does not, at least not in general: it breaks if the 2 lines are parallel / anti-parallel! In that case you'll get a division by zero when computing the value of ip.
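The whole method, including the parallel-case guard, fits in a few lines; this is a sketch with made-up names (Vec2, intersect), solving the 2x2 system by Cramer's rule:

```cpp
#include <optional>
#include <utility>

struct Vec2 { double x, y; };

// Solves r01(s) = r23(t) for s and t; returns the pair only if both lie in
// [0,1]. Returns nothing for (anti-)parallel segments, where the denominator
// is zero -- the exact case that breaks the code from the OP.
std::optional<std::pair<double, double>>
intersect(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3) {
    double dx01 = p1.x - p0.x, dy01 = p1.y - p0.y;  // direction of r01
    double dx23 = p3.x - p2.x, dy23 = p3.y - p2.y;  // direction of r23
    double denom = dx01 * dy23 - dy01 * dx23;       // zero iff parallel
    if (denom == 0.0) return std::nullopt;
    double ex = p2.x - p0.x, ey = p2.y - p0.y;
    double s = (ex * dy23 - ey * dx23) / denom;
    double t = (ex * dy01 - ey * dx01) / denom;
    if (s < 0 || s > 1 || t < 0 || t > 1) return std::nullopt;  // restriction
    return std::make_pair(s, t);
}
```

For the segments (0,0)-(2,0) and (1,-1)-(1,1), both parameters come out as 0.5, i.e. the segments cross at their midpoints.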

#5198908 Animation questions (many questions :P)[SOLVED]

Posted by haegarr on 18 December 2014 - 02:59 AM

[...] I'll try to debug each of those options, but it will probably take some time (there are a lot of things that could go wrong for each of those, with a lot of options for some of them).

Yeah, that's the reason why I spoke of a very simple, manually made (i.e. hardcoded data) model, if possible. If you have a known-good model, any exporter / importer problem is excluded. If that simple model renders well (without animation), I would test some different bone settings (still without animation) to ensure that the skinning works well. If this is also okay, then I'd add some simple animation. At the very end, I'd enable the importer and test with a more complex set-up.

#5198748 Animation questions (many questions :P)[SOLVED]

Posted by haegarr on 17 December 2014 - 06:07 AM

A thought about how to proceed: I would go with at most 4 bones per vertex for now. If that works, going for more is a question of refactoring. I would also consider building up a test scene manually, so as to have a chance to simplify the model to the bare needs. While having an overview is fine for knowing where to go, non-incremental development makes failure tracking difficult.


For now: Good luck! :)