
haegarr


#5227910 arrays

Posted by haegarr on 08 May 2015 - 12:57 AM

I'm not familiar with that SDK, but the following issues seem to exist:

 

1.) Reading 16 floats if just 1 is needed is wasteful.

2.) Allocating another 16 floats if 1 is used is wasteful.

3.) Within XPLMSetDatavf you take the address of the 10th element and apply unary negation to it!?

 

IMHO the needed code snippet looks like this (but is untested):

    float value;
    XPLMGetDatavf(pnlBri, &value, 10, 1); // read just element 10
    value = 100 - value;
    XPLMSetDatavf(pnlBri, &value, 10, 1); // write just element 10 back



#5226983 How to separate xyManager effectively into smaller responsibility classes?

Posted by haegarr on 03 May 2015 - 11:37 AM


Well, I am pretty sure passing some refs/pointers all around the code base just because somewhere deep in the code base somebody might want to use it is a bigger antipattern than using singletons...
And if 99 commands out of 100 don't use it, then I feel it's unnecessary.

In such a case the design itself is probably wrong.

 


Well, maybe I will be unable to keep to "1 command pattern for every case", but it's possible that from the same function I can create different types of commands, and that's problematic.
For example, I have a unit with these skills:
- Summon 2 skeletons (unit pool required for execution)
- Give a shield to a building (building pool required; it would be kind of strange if a building had a pointer to the pool that contains the building itself)
- Give vision to somewhere on the map (the map, or whatever will handle the fog of war, required)
 
The "solutions" I see:
- Make some classes singletons, so I can reach them when I need to, but then... I end up using singletons, which is kind of sad.
- Pass pointers to the pools and other stuff around, which is even worse than the singleton way and makes it a nightmare to refactor anything.

Or: Have a few pointers that refer to high level objects, e.g. Scene / World / Level.

And/or: Use sub-systems and link them beforehand.

And/or: Store the commands in command queues and let the respective sub-systems do their job when the queues are processed in the course of the game loop anyway.
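
To illustrate that last option with a minimal sketch (all names are invented; the details depend on your code base): the command itself only carries data, while the sub-system that owns both the queue and the unit pool executes the command during its regular update in the game loop. No pool pointer ever travels with the command.

    #include <vector>

    // Invented example: a "summon" command is pure data ...
    struct SummonCommand { int casterId; int count; /* what, where, ... */ };

    // ... and the sub-system that owns the unit pool processes the queue.
    class UnitSubSystem {
    public:
        void enqueue(const SummonCommand& cmd) { queue.push_back(cmd); }

        void update(/* time */) {
            for (const SummonCommand& cmd : queue)
                spawnSkeletons(cmd);   // the pool is a member, not a parameter
            queue.clear();
        }

    private:
        void spawnSkeletons(const SummonCommand&) { /* allocate from pool */ }
        std::vector<SummonCommand> queue;
        // ... the unit pool lives here as well ...
    };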

 


I will probably have 1 for buildings, 1 for units, 1 for images (so I don't read the same images over and over again just because I have 10 units of the same type)

What are you doing exactly? From a game's point of view, your answer makes me think that you are mixing up resources and scene objects. There is a difference between game objects that populate the scenery and resources that work as templates for game objects and/or as storage for reusable, well, resources.




#5226925 Adding non ECS features in an ECS engine (tilemap)?

Posted by haegarr on 03 May 2015 - 03:55 AM

Fact is that OOP does not provide inheritance alone, but also composition. While the former allows for a kind of specialization / generalization, the latter allows for a kind of collaboration. ECS was an attempt to make OOP's object-level composition an architectural pattern, especially against the background of game development (later on "sub-systems", cache friendliness and so on were added). There has been a fuss since the game industry discovered this way of doing things, although composition as a mechanism has existed since the dawn of OOP. It should be mentioned that, following the relevant literature, composition was recommended over inheritance years before Dungeon Siege. That said, even if you do not use an ECS, its basic principle should still be considered. (And: ECS is not the opposite of OOP.)

 

On the other hand, ECS is a tool like any other. Do not use a tool if it does not fit the problem. Not all stuff in a game fits into it, that's for sure. Terrain, sky, foliage, … do not fit. The conclusion is simple: do not try to make them fit, and hence do not base the entire engine on it. Your doubts mentioned in the OP are absolutely valid.

 

To overcome your problem, use systems organized as a hierarchy of services. Each lower layer in the hierarchy provides more basic services and uses more general structures. Even rendering is not a single layer but consists of several layers. At the bottom there is the graphics layer dealing with triangles and textures (if done this way). Above it there may be a "sprite layer". Above that there may be … At the top, not counting as rendering at all, is the scene layer which deals with terrain / ground, sky, and game objects / entities. Please notice that layers in this sense may well each consist of a couple of sub-systems. Sub-systems may collaborate with other ones on the same layer, and may use sub-systems of lower layers.

 

That said, during rendering, terrain / ground will be converted into some kind of graphic primitives by some rendering layer(s) along a path from the scene layer down to the graphics layer. Game objects / entities will also be converted into graphic primitives, not necessarily the same ones, and probably not along the same path as terrain / ground. If "sprite" is your graphic primitive of choice for both ground and entities, then you will end up with a ground renderer and an entity renderer, the first using a ground representation as input and the latter using game objects as input, and both yielding sprites as output.

 

So, the question is: to which layer(s) do Game::render and RenderSystem::update belong? Both possible solutions shown in the OP violate a clean abstraction. First off, "update(time)" is usually used to, well, drive the update of the world state. It has nothing to do with rendering, hence it should not call a renderer, and RenderSystem::update(time) should not exist as such. Instead, after the world has been updated properly, the renderer can use the world state to just draw it.

 

Coming back to the problem: Game::render seems to be the top layer of rendering, so it should work on the level of scene objects (hence iterating the ground tile by tile is a no-go there). It may iterate all drawables in the scene and invoke their render() method. Those render() methods form the second layer of rendering and perhaps already output the sprite primitives.
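
A rough sketch of that two-layer split (the interfaces are invented for illustration, not a prescription):

    #include <vector>

    struct Sprite { /* rect, texture id, texture rect, ... */ };

    // 2nd layer: each drawable knows how to emit its sprite primitives.
    struct Drawable {
        virtual ~Drawable() {}
        virtual void render(std::vector<Sprite>& out) const = 0;
    };

    class Game {
    public:
        void render() {                        // top layer: scene objects only
            spriteQueue.clear();
            for (const Drawable* d : scene)    // ground renderer and entity
                d->render(spriteQueue);        // renderer both end up here
            drawSprites(spriteQueue);          // bottom layer: graphics API
        }
    private:
        void drawSprites(const std::vector<Sprite>&) { /* triangles etc. */ }
        std::vector<const Drawable*> scene;
        std::vector<Sprite> spriteQueue;
    };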

 

All the above: My 2 cents, of course ;) 




#5226810 Animate a polygon

Posted by haegarr on 02 May 2015 - 01:59 AM

What is wrong inside circle.java, at the least, is that the variable angle is initialized to zero and then counted up to 360 within drawCircle, but it is never reset to zero. So the condition

while(angle <= 360.0)

will inhibit the generation of a second set of vertices. You want to use a for loop instead of a while loop here, because a for loop lets you reset angle each time the loop is entered.

 

BTW: A principal issue you have here is that variables like angle, s, and c should be local variables in the scope of the routine drawCircle, because they are used just for the inner workings of that routine. Besides that, short names like s and c may be used for loop variables (often i, j, or n are used for that), but they should never be used for member variables. Use expressive names instead.




#5223007 Alternatives to global variables / passing down references through deep "...

Posted by haegarr on 13 April 2015 - 02:06 PM


Let's make sure we are talking about the same things here, could you explain what exactly you mean by layers and levels, up and down?
The way I see it, a layer is a scope, like the function body of main() or the scope of a class like GameState. Up is main() and down is something like statemanager.gamestate.level.player, right?
So if the Player wants to be drawn, it should not interact with the Renderer, but instead the Renderer should "collect" all Renderables "below" it, or the class that owns the Renderer should feed the renderer the Sprites/Renderables?

Well, let me give an example in which several sub-systems are involved.

 

The player manipulates their input device, so input is generated. The Input sub-system fetches the raw input, preprocesses / prepares it, and stores it into the game's input queue. Notice that the Input sub-system does not call any other sub-system. Somewhere at the beginning of the game loop the PlayerController looks into the input queue and determines whether the current input situation matches one or more of its configurations. Notice that this can be understood as the PlayerController accessing a service of the Input sub-system, namely its input queue: a higher level system (PlayerController) utilizes a lower level system (Input).

 

Let's say that the PlayerController detects an input situation and hence generates / updates a motion data structure for "intention for forward walking". Notice that such intentions may also originate from another sub-system, namely the AI for NPCs. The difference is that the AI usually outputs an intention like "move to location X". However, both are examples of possible intentions that address the Movement sub-system. When that sub-system runs its update, it investigates the intentions and checks whether they are "physically" possible. If not, the intention is cleared and ignored. Notice that the lower level sub-system (Movement) does not communicate directly with a higher level one (PlayerController or AI); at most it replies to the AI by canceling an intention.

 

The Movement sub-system updates the movement structure to reflect the new intention (assuming that it has passed the possibility check). It then updates the associated placement data structure according to the current motion.

 

Later in the game loop the Animation sub-system is updated. It iterates the animated sprites and adapts their drawables, i.e. determines the valid slice of the texture and such.

 

Later on in the game loop, (visual) rendering is invoked. The upper layer of rendering investigates all sprites (notice: no distinction between PC, NPC, or whatever) and does visibility culling. Any sprite that passes is enqueued into a rendering queue, not as a sprite but as a drawable (this rect, this texture, this texture rect; something like that). Then the lower layer renderer is invoked. This is the renderer that actually knows about D3D, OpenGL, or whatever. It iterates the queue of drawables and generates graphics API calls from it. Notice that the lower layer of rendering is fed with low level data (close to meshes and textures) which is also ready to use (placed where needed, animated if necessary, and so on). Again there is no need for it to communicate with any higher sub-system.
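
Condensed into (invented) code, the order described above might read like this; each sub-system reads what the previous ones left behind, and none calls upward:

    // Sketch only: each struct stands for one sub-system described above.
    struct Input            { void fetchAndPreprocess() {} }; // fills input queue
    struct PlayerController { void update() {} };             // reads the queue
    struct AI               { void update(float) {} };        // writes intentions
    struct Movement         { void update(float) {} };        // validates, places
    struct Animation        { void update(float) {} };        // adapts drawables
    struct Renderer         { void render() {} };             // culls and draws

    void runFrame(Input& in, PlayerController& pc, AI& ai, Movement& mv,
                  Animation& anim, Renderer& rend, float dt)
    {
        in.fetchAndPreprocess(); // calls no other sub-system
        pc.update();             // input queue -> "intention forward walking"
        ai.update(dt);           // NPC intentions like "move to location X"
        mv.update(dt);           // possibility check, placement update
        anim.update(dt);         // texture slices for animated sprites
        rend.render();           // culling, drawable queue, API calls
    }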




#5222892 Alternatives to global variables / passing down references through deep "...

Posted by haegarr on 13 April 2015 - 02:34 AM

The best approach is to avoid dependencies, but obviously that can be done only within limits. In the end all pieces of software work together to implement a solver for a problem. So in reality one simply tries to get close to the minimum of dependencies, and to make the remaining dependencies explicit.

 

A first step is to define modules, each one with a well defined API by which clients can call the services of the module. A module need not be a single object, but it can be beneficial if the API is provided by a single object (see e.g. the facade pattern, and to some extent also the strategy pattern). The usual name for such modules here in the forums is "sub-system". The code snippets in the OP show that such modularization seems to be in use already.

 

Now if enough sub-systems are available, the problem of disorganization appears again. This can be mitigated by using a layered architecture. Sub-systems collaborate with sub-systems in the same layer, they utilize sub-systems of the next lower layer, but they MUST NOT call sub-systems of upper layers (although they MAY respond to them), and they SHOULD NOT utilize sub-systems from the layer after next. Of course, in reality this is more a rule of thumb than a strict law.

 

So far, the above gives a kind of top level organization, but it does not actually solve the OP's problem. However, it gives us a hint: if a lower level sub-system MUST NOT call a higher level one, then the lower level sub-system also has no clue what kinds of structures the higher levels are using. That means that top-down communication should be done with data structures belonging to the lower levels.

 

The next thought is about the game loop. The game loop is actually a sequence of update calls on high level sub-systems. It is sequential because we want clear and (as far as possible) stable world states, i.e. no oscillation between particular sub-systems that would delay the frame unpredictably! Also, from an efficiency point of view, it is beneficial to run a particular update step on all entities before passing on to the next kind of step.

 

So in such a scenario we actually have a sub-system (on the game loop level) that runs an update on all of its active entities, for each prepares a data structure passing on the updated data, and … that's it. Because all interested sub-systems use that same pool of data structures, they already know where to find it. Of course, such a high level sub-system will utilize other sub-systems, and this may mean passing pointers around, though often only at initialization time. But the number of pointers in each particular case is low, especially since you couple sub-systems instead of (data) objects.
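
As a small sketch of such coupling (names invented): one sub-system owns the pool of per-entity data structures, and interested sub-systems receive a pointer to that pool once, at initialization time.

    #include <vector>

    struct MotionData { /* intention, velocity, ... */ };

    struct MovementSystem {
        std::vector<MotionData> pool;             // one entry per active entity
        void update(float) { /* iterate pool, write placements */ }
    };

    struct AnimationSystem {
        const std::vector<MotionData>* motions;   // coupled once, at init time
        void init(const MovementSystem& m) { motions = &m.pool; }
        void update(float) { /* read *motions, adapt sprites */ }
    };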




#5221786 Following a path saved in svg format

Posted by haegarr on 07 April 2015 - 01:09 AM


Any ideas, suggestions, or links are appreciated.

1.) nik02 has already hinted at a valuable source: the specification, which is graciously granted by a searchable internet.

 

2.) SVG is based on XML, and both iOS and Android have built-in support for reading XML. In both cases this allows you to outsource file reading and primary parsing, giving you a DOM (or you can try to use SAX-like callbacks) of the SVG document.

 

3.) SVG is a media-sheet oriented thing. Your world is perhaps not (few details are given in the OP). So having an SVG document is most often not sufficient for such game specific purposes. IMHO you should stay away from it as a direct resource for your game. It is okay, of course, to import it into some tool that then exports a game-ready resource (i.e. with correct orientation and a correct co-ordinate basis).

 

4.) If you want to stay with SVG path descriptions, then you can also consider just manually copying the relevant parts out of the SVG file and pasting them into a still text based but line oriented resource file of your own. This file could contain many different paths, and it could contain all of the additional necessary stuff (including orientation, placement, and scaling); see the made-up sample after the last point.

 

5.) Following the path in a smooth, time based manner is then a totally different thing. It requires you to find a re-parametrization of the path so that it fits your needs. That is IMHO worth its own thread when the time comes.
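
Regarding point 4, such a hand-made resource file might look like this (the format is entirely made up; only the idea of one line oriented file holding several paths plus their placement data matters):

    # paths.txt -- invented line-oriented format
    path castle_route
      translate 12.0 4.5
      scale     0.01
      flipY     yes
      d         M 100 200 C 100 100 250 100 250 200 S 400 300 400 200
    end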




#5221581 OSX not capturing keyboard input

Posted by haegarr on 06 April 2015 - 03:14 AM

AFAIK you need to ensure that your window returns YES from canBecomeKeyWindow and probably also from canBecomeMainWindow. Whether this is the default depends on the style of your window, e.g. whether it is borderless or has a title bar or a resizing widget. However, the fact that Xcode still receives keystrokes makes me think that your window is not the key window / main window.




#5220505 Is my triangle intersection code correct?

Posted by haegarr on 31 March 2015 - 10:15 AM


Which I think gives you the matrix...

… almost, except that all of v1 and v2 need to be negated, or else all the other coefficients r0, r1, and t3 need to be negated. 




#5220450 Is my triangle intersection code correct?

Posted by haegarr on 31 March 2015 - 04:18 AM


I was wondering if my triangle intersection code is correct?

IMHO it isn't. It has at least 3 issues.

 

It seems that v1 and v2 shall be direction vectors that span the (infinite) plane in which the triangle lies. Now t1, t2, t3 are known to be on that plane, but ray.direction is not related to that plane at all. Hence computing v1 and v2 must not use ray.direction. With 3 positions, you can compute 6 difference vectors

    t1 - t2, t1 - t3, t2 - t3 (and their negative variants)

from which you need 2, say

   v1 := t1 - t3

   v2 := t2 - t3

 

(This is your 1st issue: Your difference vectors are not correct.)

 

With the above you can describe the plane as an infinite set of points in space, reachable by any linear combination of v1 and v2 when starting at any known point on the plane. For example

   P( a, b ) := t3 + v1 * a + v2 * b

Notice, however, that this works if and only if v1 and v2 are not co-linear, i.e. you cannot find a scalar f so that v1 * f = v2. This means, in other words, that t1, t2, and t3 must not lie on a straight line in space.

 

Another kind of definition for such a plane is to use not 2 direction vectors inside that plane, but instead the normal of the plane. Well, with 2 difference vectors you can compute the normal as their cross-product:

    n1 := v1 x v2

Then the plane can be given in the so-called Hesse normal form:

   ( x - t3 ) . n1 == 0

It means "any point x in space belongs to that plane if the distance to the plane is zero".

 

(This is your 2nd issue: you can use the Hesse normal form to compute whether / where the ray hits the plane, but not whether / where the ray hits the triangle, because the information about the triangle is lost in that formula. The 3rd issue is that your use of the Hesse normal form in that way is not complete anyway.)

 

Similar to the plane above, a ray can be understood as the set of points in space when starting at a known position (the ray.origin) and going for any distance along a direction (the ray.direction):

   R( c ) := ray.origin + c * ray.direction

 

What you are now interested in is the set of points in space that are part of both the set of points of the plane and the set of points of the ray. Hence you want to look for any points with the condition

   P( a, b ) == R( c )

 

Obviously you have to solve for the 3 unknowns a, b, and c. Luckily, the above condition is given in 3 dimensional space, hence giving you 3 equations, one for each dimension:

   Px( a, b ) == Rx( c )

   Py( a, b ) == Ry( c )

   Pz( a, b ) == Rz( c )

 

For your purpose it is sufficient to solve the above linear equation system for a and b. Along the way you will see that there is a chance of a "division by zero". If that occurs then the ray is parallel to the plane, hence there is either no intersection (if the ray is distant from the plane) or infinitely many intersections (if the ray lies in the plane). You have to catch this case, of course.

 

Assuming from here on that the ray is not parallel to the plane, you get a unique solution for a and b, say a' and b'. However, so far you only know that the ray hits the plane; you want to know whether the ray hits the triangle. As said above, a and b in the plane formula allow you to reach any point on the plane. So you need to take a closer look at a' and b', especially whether they describe a point within the triangle. Because we have built v1 and v2 from the triangle's corners so that they describe 2 of its legs emanating from t3, any solution with

    a' < 0  or  a' > 1  or  b' < 0  or  b' > 1

cannot be part of the triangle. That gives the condition

   0 <= a' <= 1  and  0 <= b' <= 1

 

Thinking further about the above limits, the figure described by them is a parallelogram of which the desired triangle is one half (simply choose a' == 1 and try some b' between 0 and 1 to see this). That introduces an extended condition:

  … and a' + b' <= 1

 

 

EDIT: Other ways of solving this may exist as well. The above one is the straightforward way.
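
For illustration, a minimal C++ sketch of the above (in the spirit of this thread: a sketch, with Vec3 and all names invented). It solves the 3-equation system with Cramer's rule, each determinant written as a scalar triple product:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  sub  (Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
    static Vec3  cross(Vec3 a, Vec3 b) { Vec3 r = { a.y*b.z - a.z*b.y,
                                                    a.z*b.x - a.x*b.z,
                                                    a.x*b.y - a.y*b.x }; return r; }
    static float dot  (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Solves  t3 + v1*a + v2*b == origin + dir*c  for (a, b, c), then applies
    // the conditions  a >= 0, b >= 0, a + b <= 1  derived above. On a hit,
    // *outC receives the distance c along the ray.
    bool rayHitsTriangle(Vec3 origin, Vec3 dir,
                         Vec3 t1, Vec3 t2, Vec3 t3, float* outC)
    {
        const Vec3 v1  = sub(t1, t3);   // two legs of the triangle,
        const Vec3 v2  = sub(t2, t3);   // both emanating from t3
        const Vec3 rhs = sub(origin, t3);

        // Rearranged system: v1*a + v2*b - dir*c == rhs
        const float det = -dot(cross(v1, v2), dir);
        if (std::fabs(det) < 1e-6f)
            return false;               // ray parallel to the plane: catch it

        const float a = -dot(cross(rhs, v2), dir) / det;
        const float b = -dot(cross(v1, rhs), dir) / det;
        if (a < 0.0f || b < 0.0f || a + b > 1.0f)
            return false;               // hits the plane but not the triangle

        *outC = dot(cross(v1, v2), rhs) / det;
        return true;
    }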




#5219563 Images from engine to level designer

Posted by haegarr on 27 March 2015 - 04:47 AM

IMHO: Browsing assets should always be possible without loading the assets themselves until necessary. This means loading meta-information about the assets (name, tags, creation date, author, …) and, for a visual presentation, a thumbnail and/or preview. So yes, storing thumbnails / previews besides the assets would be the right way.

 

Feedback on changes while the game is running can be done by file observation if no other channel is available.

 

In my application suite, editor and engine run in separate processes as well. Although the editor is able to produce material previews by itself, it cannot produce the entire game view. However, using the engine to produce in-editor views is just one use case; running the game with debugging / logging / remote control is another. Hence I'm going the "old school" way: communication over network sockets, even when the game engine runs as a slave of the editor on the same machine. This allows for bi-directional communication, in my case via a (customized, of course) BEEP protocol. Well, this is powerful and flexible, but it certainly does not play in the "keeping things simple" category!

 

BTW: I'm developing on a Mac. On Mac OS you cannot (AFAIK) render in windows of other processes, but you can exchange textures between them. Something like that does not exist for Windows (again AFAIK).




#5218407 rotate image around center

Posted by haegarr on 23 March 2015 - 01:50 AM

The center of the quad is, derived from the vertex co-ordinates, at

   xc = offx + ( 1300 + 1684 ) / 2 = offx + 1492

   yc = offy + ( 1050 + 1434 ) / 2 = offy + 1242

in local space.
 
What you want is that at the moment when invoking glBegin(GL_QUADS), the center of the quad is at (0, 0), hence requiring a translation by
   xr = -xc
   yr = -yc
 
Hence the correct code is something like
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    // OpenGL applies the transform specified last to the vertices first,
    // so issue the rotation before the translation that moves the quad's
    // center into the origin:
    glRotatef(XPLMGetDataf(phi), 0, 0, 1);           // roll
    glTranslatef(-(offx + 1492), -(offy + 1242), 0); // center -> (0, 0)

    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_TEXTURE_2D);
    XPLMBindTexture2d(textures[AVIDYNE_TEXTURE].texIMGID, 0);
    glColor3f(1, 1, 1);
    glTexEnvf(GL_TEXTURE_ENV,GL_TEXTURE_ENV_MODE,GL_MODULATE);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex2f(offx + 1300, offy + 1050);
    glTexCoord2f(0, 1);
    glVertex2f(offx + 1300, offy + 1434);
    glTexCoord2f(1, 1);
    glVertex2f(offx + 1684, offy + 1434);
    glTexCoord2f(1, 0);
    glVertex2f(offx + 1684, offy + 1050);
    glEnd();

    glPopMatrix();

(untested code; please use it just as a hint)




#5218220 [UNITY] 2D Graphics Issue

Posted by haegarr on 22 March 2015 - 04:03 AM


I don't know if Unity has an option you're speaking of, but perhaps that is the Filter Mode on the texture? When I set it to Point, it changed the images from being blurred from AA to their original depiction in the tile set

That is part of what I meant, yes. In fact texture mapping allows either snapping to the nearest texel (that's short for "texture pixel"), or interpolating linearly between the 4 texels surrounding an addressed sub-texel position.

 

However, pixel perfect mapping also requires a 1:1 relation between the texels and the pixels on the screen, so that 1 texel covers exactly 1 pixel. That cannot be achieved if the target screen has a different size (measured in pixels) and you nevertheless want the playground to fit onto the screen. In such a case scaling is necessary, and scaling always breaks pixel perfect mapping. That may be the reason for the observed issue.

 


I will try separating the tiles with a border; however, I do not quite understand what you mean by using the next inner pixels to create a border for tiling sprites. Why wouldn't I just want to use a transparent border for both tiles and regular sprites? I am very new to game development and don't know a lot about 2D graphics (or really anything, for that matter), and am only marginally familiar with sprite sheets / texture atlases.

The problem shows pixels from outside the wanted rectangle. If you make the outside (the said border) transparent, then those transparent pixels will be shown. That will cause gaps in the tiled background. So if you cannot avoid extra pixels coming in, what you want is that those extra pixels attract as little attention as possible. And that is achieved when those extra pixels look like the ones already there.

 

So if you have selected the sprite image with a rectangular frame, and the pixel column just outside the left edge of the frame is a copy of the pixel column just inside the left edge, the pixel column just outside the right edge is a copy of the pixel column just inside the right edge, and similarly for the top and bottom rows, then you have repeated the pixels inside the frame into the outside. For completeness, you should also fill the 4 corners.

 

Example coming … If the original image slice has pixels like

123
456
789

then after adding the repetition it looks like

11233
11233
44566
77899
77899

Note that the inner rectangle is still the original one.

 

Non-tiling sprites usually already have a transparent background around them. So the above method would just repeat the transparent pixels, which would be equivalent to drawing a transparent border around the selection frame.
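
For what it's worth, a small sketch of the border replication (one integer per pixel just to mirror the digit example; real code would copy RGBA values):

    #include <vector>

    // Returns a (w+2) x (h+2) image whose outer ring repeats the border
    // pixels of the w x h source, as in the example above.
    std::vector<int> padWithBorder(const std::vector<int>& src, int w, int h)
    {
        std::vector<int> dst((w + 2) * (h + 2));
        for (int y = 0; y < h + 2; ++y)
            for (int x = 0; x < w + 2; ++x)
            {
                // Clamp to the nearest source pixel; edges repeat the
                // adjacent column/row, corners repeat the corner pixels.
                int sx = x - 1 < 0 ? 0 : (x - 1 >= w ? w - 1 : x - 1);
                int sy = y - 1 < 0 ? 0 : (y - 1 >= h ? h - 1 : y - 1);
                dst[y * (w + 2) + x] = src[sy * w + sx];
            }
        return dst;
    }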




#5217822 Data structure with bool field. How to set correctly?

Posted by haegarr on 20 March 2015 - 02:45 AM


[...] I just changed bool to int and now everything works fine.

To be as safe as possible you should

a) use the fixed-size types uint32_t and int32_t from stdint.h, as suggested by MJP

b) do data padding manually; if possible, disable automatic padding, e.g.

 #pragma pack(push, 1)
 struct PixelData { … };
 #pragma pack(pop)

c) implement a compile-time check (this one is offered by C++11) like

 static_assert(sizeof(PixelData)==36, "PixelData not correctly sized");


If you are not sure at some point, you can also apply a static assertion using the offsetof operation on particular struct fields.
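
Putting a), b), and c) together, a sketch might look like the following (the fields of PixelData are invented purely for illustration; only the technique matters):

    #include <cstdint>
    #include <cstddef>            // offsetof

    #pragma pack(push, 1)         // no compiler-inserted padding from here on
    struct PixelData              // invented layout: 9 x 4 bytes = 36 bytes
    {
        int32_t  x, y, z;         // fixed-size types, as suggested by MJP
        uint32_t r, g, b, a;
        uint32_t flags;           // the former bool, now explicitly 4 bytes
        uint32_t padding;         // manual padding, visible in the source
    };
    #pragma pack(pop)

    // Compile-time guards against silent layout changes (C++11):
    static_assert(sizeof(PixelData) == 36, "PixelData not correctly sized");
    static_assert(offsetof(PixelData, flags) == 28, "flags at unexpected offset");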




#5217819 Advice regarding 2D art

Posted by haegarr on 20 March 2015 - 02:13 AM


[…] Is it possible to somehow animate such images (if only slightly)? I'm thinking of something similar to how Dragon's Crown handles NPCs (the guild master in the video below @06:38):

AFAIS the animation you talk about is just warping (with small amplitudes) of a foreground image that is composited onto a background image. How to do that depends on the graphics API you want to use. For example, if using OpenGL ES, you have a simple mesh as "carrier" of an image texture for drawing anyway. Using a finer mesh, e.g. with the vertices forming a grid, you are able to shift the vertices around so that the grid becomes irregular. With a few meshes with slightly shifted vertices, together with time based vertex position interpolation during rendering, such a warping effect can be done easily (the classic way of image warping is definitely more complex than this).
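
A minimal sketch of the time based interpolation between two such vertex grids (names invented; how t is derived from time is up to you, e.g. a slow sine oscillation):

    struct Vertex2 { float x, y; };

    // Linearly blends a rest grid towards a warped key grid; t in [0, 1],
    // e.g. t = 0.5f + 0.5f * sinf(time * speed).
    void blendGrid(const Vertex2* rest, const Vertex2* warped,
                   Vertex2* out, int count, float t)
    {
        for (int i = 0; i < count; ++i)
        {
            out[i].x = rest[i].x + t * (warped[i].x - rest[i].x);
            out[i].y = rest[i].y + t * (warped[i].y - rest[i].y);
        }
    }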





