

haegarr

Member Since 10 Oct 2005
Online Last Active Today, 12:41 PM

#5223007 Alternatives to global variables / passing down references through deep "...

Posted by haegarr on 13 April 2015 - 02:06 PM


Let's make sure we are talking about the same things here, could you explain what exactly you mean by layers and levels, up and down?
The way I see it is, a layer is a scope, like the function body of main() or the scope of a class like GameState. Up is main() and down is something like statemanager.gamestate.level.player, right?
So if the Player wants to be drawn, it should not interact with the Renderer but instead the Renderer should "collect" all Renderables "below" it, or the class that owns Renderer should feed the renderer the Sprites/Renderables?

Well, let me give an example in which several sub-systems are involved.

 

The player manipulates their input device, so input is generated. The Input sub-system fetches the raw input, preprocesses / prepares it, and stores it in the game's input queue. Notice that the Input sub-system does not call any other sub-system. Somewhere at the beginning of the game loop the PlayerController looks into the input queue and determines whether the current input situation matches one or more of its configurations. Notice that this can be understood as the PlayerController accessing a service of the Input sub-system, namely its input queue: a higher level system (PlayerController) utilizes a lower level system (Input).

 

Let's say that the PlayerController detects an input situation and hence generates / updates a motion data structure expressing an "intention for forward walking". Notice that such intentions may also originate from another sub-system, namely the AI for NPCs. The difference is that AI usually outputs intentions like "move to location X". Both are examples of intentions that address the Movement sub-system. When that sub-system runs its update, it investigates the intentions and checks whether they are "physically" possible. If not, the intention is cleared and ignored. Notice that the lower level sub-system (Movement) does not communicate directly with a higher level one (PlayerController or AI); at most it replies indirectly, e.g. by canceling an intention.

 

The Movement sub-system updates the movement structure to reflect the new intention (assuming that it has passed the possibility check). It also updates the associated placement data structure according to the current motion.

 

Later in the game loop the animation sub-system is updated. It iterates over the animated sprites and adapts their drawables, i.e. determines the currently valid slice of the texture and so on.

 

Later on in the game loop, (visual) rendering is invoked. The upper layer of rendering investigates all sprites (notice: no distinction between PC, NPC, or whatever) and does visibility culling. Any sprite that passes is enqueued into a rendering queue, not as a sprite but as a drawable (this rect, this texture, this texture rect; something like that). Then the lower layer renderer is invoked. This is the renderer that actually knows about D3D, OpenGL, or whatever. It iterates the queue of drawables and generates graphics API calls from it. Notice that the lower layer of rendering is fed with low level data (close to meshes and textures) which is also ready to use (placed where needed, animated if necessary, and so on). Again there is no need for it to communicate with any higher sub-system.
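
To illustrate the overall control flow described above, here is a minimal, hypothetical C++ sketch (all class and method names are invented for illustration; they are not taken from the OP's code):

    // Each sub-system is updated once per frame in a fixed order; every
    // sub-system only reads data prepared by earlier (or lower level)
    // sub-systems and never calls "upward".
    void Game::runFrame(float dt) {
        input.update();             // fetch raw input, fill the input queue
        playerController.update();  // read the input queue, write motion intentions
        ai.update();                // NPC brains write motion intentions, too
        movement.update(dt);        // validate intentions, update placements
        animation.update(dt);       // adapt drawables (texture slices etc.)
        renderer.collectAndCull();  // sprites -> culled drawables in a queue
        renderer.submit();          // lower layer: drawables -> graphics API calls
    }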




#5222892 Alternatives to global variables / passing down references through deep "...

Posted by haegarr on 13 April 2015 - 02:34 AM

The best approach is to avoid dependencies, but obviously that can be done only within limits. In the end all pieces of software have to work together to solve the problem at hand. So in reality one simply tries to get as close as possible to the minimum of dependencies, and to make the remaining dependencies explicit.

 

A first step is to define modules, each one with a well defined API by which clients can call the services of the module. A module need not be a single object, but it can be beneficial if the API is provided by a single object (see e.g. the facade pattern, and to some extent also the strategy pattern). The usual name for such modules here in the forums is "sub-system". The code snippets in the OP suggest that such modularization is already in use.

 

Now if enough sub-systems are available, the problem of disorganization appears again. This can be mitigated by using a layered architecture: sub-systems collaborate with sub-systems in the same layer, they utilize sub-systems of the next lower layer, but they MUST NOT call sub-systems of upper layers (although they MAY respond to them), and they SHOULD NOT utilize sub-systems from the layer after next. Of course, in reality this is more a rule of thumb than a strict law.

 

So far, the above gives a kind of top level organization, but it does not actually solve the OP's problem. However, it gives us a hint: if a lower level sub-system MUST NOT call a higher level one, then the lower level sub-system also has no clue what kind of structures the higher levels are using. That means that top-down communication should be done with data structures belonging to the lower levels.

 

The next thought is about the game loop. The game loop is actually a sequence of update calls on high level sub-systems. It is sequential because we want clear and (as far as possible) stable world states, i.e. no oscillation between particular sub-systems, which would delay the frame unpredictably! Also from an efficiency point of view, it is beneficial to run a particular update step on all entities before passing on to the next kind of step.

 

So in such a scenario we actually have a sub-system (on the game loop level) that runs an update on all of its active entities, prepares a data structure for each holding the updated data, and … that's it. Because all interested sub-systems use that same pool of data structures, they already know where to find the data. Of course, such a high level sub-system will utilize other sub-systems, and this may mean passing pointers around, but often only at initialization time. The amount of pointers in each particular case is low, especially since you couple sub-systems instead of (data) objects.
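
As a rough sketch of what "passing pointers only at initialization time" can look like (hypothetical names again, just to carry the idea):

    #include <vector>

    // The Movement sub-system owns the pool of motion/placement data.
    struct MovementData { /* intention, velocity, placement, ... */ };

    class MovementSystem {
    public:
        std::vector<MovementData>& pool() { return m_pool; }
        void update(float dt);
    private:
        std::vector<MovementData> m_pool;
    };

    class PlayerController {
    public:
        // Wired exactly once at initialization time; no global variable needed.
        explicit PlayerController(MovementSystem& movement) : m_movement(movement) {}
        void update();   // writes intentions into m_movement.pool()
    private:
        MovementSystem& m_movement;
    };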




#5221786 Following a path saved in svg format

Posted by haegarr on 07 April 2015 - 01:09 AM


Any ideas, suggestions, or links is appreciated.

1.) nik02 has already hinted at a valuable source: the specification, which a search on the internet readily turns up.

 

2.) SVG is based on XML, and both iOS and Android have built-in support for reading XML. In both cases you can outsource file reading and primary parsing, getting a DOM of the SVG document (or you can try SAX-like callbacks instead).

 

3.) SVG is a media-sheet oriented format. Your world is perhaps not (few details are given in the OP). So an SVG document alone is most often not sufficient for such game specific purposes. IMHO you should stay away from it as a direct resource for your game. It is okay, of course, to import it into some tool that then exports a game-ready resource (i.e. with correct orientation and a correct co-ordinate basis).

 

4.) If you want to stay with SVG path descriptions, then you can also consider manually copying the relevant parts out of the SVG file and pasting them into a still text based but line oriented resource file of your own. This file could contain many different paths, and it could contain all of the additional necessary stuff (including orientation, placement, scaling); see the sketch after this list.

 

5.) Following the path in a smooth, time based manner is a totally different thing. It requires you to find a re-parametrization of the path so that it fits your needs. That is IMHO worth its own thread when the time comes.
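
Regarding 4.), a purely hypothetical example of what such a line oriented path file could look like (the keywords are invented here just to illustrate the idea; use whatever fits your loader):

    # one path per block; placement data accompanies the raw SVG path string
    path   patrol_route_01
    origin 120.0 340.0
    scale  0.5
    flipY  true
    data   M 10 80 C 40 10, 65 10, 95 80 S 150 150, 180 80
    end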




#5221581 OSX not capturing keyboard input

Posted by haegarr on 06 April 2015 - 03:14 AM

AFAIK you need to ensure that your window returns YES from canBecomeKeyWindow and probably also from canBecomeMainWindow. Whether this is the default depends on the style of your window, e.g. whether it is borderless or has a title bar or resize widget. However, the fact that Xcode still receives keystrokes makes me think that your window is not the key window / main window.




#5220505 Is my triangle intersection code correct?

Posted by haegarr on 31 March 2015 - 10:15 AM


Which I think gives you the matrix...

… almost, except that all of v1 and v2 need to be negated, or else all the other coefficients r0, r1, and t3 need to be negated. 




#5220450 Is my triangle intersection code correct?

Posted by haegarr on 31 March 2015 - 04:18 AM


I was wondering if my triangle intersection code is correct?

IMHO it isn't. It has at least 3 issues.

 

It seems that v1, v2 shall be some direction vectors that span the (infinite) plane in which the triangle lies. Then t1, t2, t3 are known to be on that plane, but ray.direction is not related to that plane. Hence computing v1, v2 must not use ray.direction. Now, with 3 positions, you can compute 6 difference vectors

    t1 - t2, t1 - t3, t2 - t3 (and their negative variants)

from which you need 2, say

   v1 := t1 - t3

   v2 := t2 - t3

 

(This is your 1st issue: Your difference vectors are not correct.)

 

With the above you can describe the plane as an infinite set of points in space, reachable by any linear combination of v1 and v2 when starting at any known point on the plane. For example

   P( a, b ) := t3 + v1 * a + v2 * b

Notice, however, that this works if and only if v1 and v2 are not co-linear, i.e. you cannot find a scalar f so that v1 * f = v2. This means, in other words, that t1, t2, and t3 must not lie on a straight line in space.

 

Another kind of definition for such a plane is to use not 2 direction vectors inside that plane, but instead the normal of the plane. Well, with 2 difference vectors you can compute the normal as their cross-product:

    n1 := v1 x v2

Then the plane can be given in the so-called Hesse Normal Form:

   ( x - t3 ) . n1 == 0

It means "any point x in space belongs to that plane if the distance to the plane is zero".

 

(This is your 2nd issue: You can use the Hesse Normal Form for computing whether / where the ray hits the plane but not whether / where the ray hits the triangle, because the information about the triangle is lost in that formula. The 3rd issue is that using the Hesse Normal Form in that way is not complete.)

 

Similar to the plane above, a ray can be understood as the set of points in space when starting at a known position (the ray.origin) and going for any distance along a direction (the ray.direction):

   R( c ) := ray.origin + c * ray.direction

 

What you are now interested in is the set of points in space that are part of both the set of points of the plane and the set of points of the ray. Hence you want to look for any points with the condition

   P( a, b ) == R( c )

 

Obviously you have to solve for the 3 unknowns a, b, and c. Luckily, the above condition is given in 3 dimensional space, hence giving you 3 equations, one for each dimension:

   Px( a, b ) == Rx( c )

   Py( a, b ) == Ry( c )

   Pz( a, b ) == Rz( c )

 

For you it is sufficient to solve the above linear equation system for a and b. Along the way you will see that there is a chance of a "division by zero". If that occurs then the ray is parallel to the plane, hence there is either no intersection (if the ray is distant from the plane) or infinitely many intersections (if the ray lies in the plane). You have to catch this case, of course.

 

Considering now that the ray is not parallel to the plane, you get a unique solution for a and b, say a' and b'. However, so far you just know that the ray hits the plane, but you want to know whether the ray hits the triangle. As said above, a and b in the plane formula allow you to reach any point on the plane. So you need to take a closer look at a' and b', especially whether they describe a point within the triangle. Because we have built v1 and v2 from the triangle's corners so that they describe 2 of its legs emanating from t3, any solution with

    a' < 0  or  a' > 1  or  b' < 0  or  b' > 1

cannot be part of the triangle. That gives the condition

   0 <= a' <= 1  and  0 <= b' <= 1

 

Thinking further about the above limits, the figure described by them is a rhombus of which the desired triangle is one half (simply choose a' == 1 and try some b' between 0 and 1 to see this). That introduces an extended condition:

  … and a' + b' <= 1

 

 

EDIT: Other ways of solving this exist as well. The above way is the straightforward one.
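
A compact C++ sketch of the above, solving the 3x3 system with Cramer's rule (this is essentially the well-known Moeller-Trumbore formulation). The Vec3 type and the dot/cross helpers are assumptions here, not taken from your code:

    #include <cmath>   // std::fabs

    // Returns true if the ray hits the triangle (t1, t2, t3); outC receives the
    // ray parameter c, so the hit point is origin + c * dir.
    bool intersectRayTriangle(const Vec3& origin, const Vec3& dir,
                              const Vec3& t1, const Vec3& t2, const Vec3& t3,
                              float& outC)
    {
        const float eps = 1e-6f;
        const Vec3 v1 = t1 - t3;               // two legs emanating from t3
        const Vec3 v2 = t2 - t3;
        const Vec3 p  = cross(dir, v2);
        const float det = dot(v1, p);           // ~0 => ray parallel to the plane
        if (std::fabs(det) < eps) return false;
        const float invDet = 1.0f / det;
        const Vec3 w = origin - t3;
        const float a = dot(w, p) * invDet;     // a' in the text
        if (a < 0.0f || a > 1.0f) return false;
        const Vec3 q = cross(w, v1);
        const float b = dot(dir, q) * invDet;   // b' in the text
        if (b < 0.0f || a + b > 1.0f) return false;
        const float c = dot(v2, q) * invDet;    // distance along the ray
        if (c < 0.0f) return false;             // hit lies behind the ray origin
        outC = c;
        return true;
    }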




#5219563 Images from engine to level designer

Posted by haegarr on 27 March 2015 - 04:47 AM

IMHO: Browsing assets should always be possible without loading the assets themselves until necessary. This means loading meta-information on assets (name, tags, creation date, author, …) and, in case of a visual presentation, a thumbnail and/or preview. So yes, storing thumbnails/previews beside the assets would be the right way.

 

Feedback on online changes can be done by file observation if no other channel is available.

 

In my application suite, editor and engine run in separate processes as well. Although the editor is able to produce material previews by itself, it cannot produce the entire game view. However, using the engine to produce in-editor views is just one use case. Running the game with debugging / logging / remote control is another. Hence I'm going the "old school" way: communication over network sockets, even when the game engine runs as a slave of the editor on the same machine. This allows for bi-directional communication, in my case using a (customized, of course) BEEP protocol. Well, this is powerful and flexible, but it certainly does not play in the "keeping things simple" category!

 

BTW: I'm developing on a Mac. On Mac OS you cannot (AFAIK) render in windows of other processes, but you can exchange textures between them. Something like that does not exist for Windows (again AFAIK).




#5218407 rotate image around center

Posted by haegarr on 23 March 2015 - 01:50 AM

The center of the quad is, derived from the vertex co-ordinates, at

   xc = offx + ( 1300 + 1684 ) / 2 = offx + 1492

   yc = offy + ( 1050 + 1634 ) / 2 = offy + 1342

in local space.
 
What you want is that at the moment when invoking glBegin(GL_QUADS), the center of the quad is at (0, 0), hence requiring a translation by
   xr = -xc
   yr = -yc
 
Hence the correct code is something like
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glTranslatef(-(offx + 1492), -(offy + 1342), 0);
    glRotatef(XPLMGetDataf(phi), 0, 0, 1);//roll

    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_TEXTURE_2D);
    XPLMBindTexture2d(textures[AVIDYNE_TEXTURE].texIMGID, 0);
    glColor3f(1, 1, 1);
    glTexEnvf(GL_TEXTURE_ENV,GL_TEXTURE_ENV_MODE,GL_MODULATE);
    glColor3f(1, 1, 1);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex2f(offx + 1300, offy + 1050);
    glTexCoord2f(0, 1);
    glVertex2f(offx + 1300, offy + 1434);
    glTexCoord2f(1, 1);
    glVertex2f(offx + 1684, offy + 1434);
    glTexCoord2f(1, 0);
    glVertex2f(offx + 1684, offy + 1050);
    glEnd();

    glPopMatrix();

(untested code; please use just as hint)




#5218220 [UNITY] 2D Graphics Issue

Posted by haegarr on 22 March 2015 - 04:03 AM


I don't know if Unity has an option you're speaking of, but perhaps that is the Filter Mode on the texture? When I set it to Point, it changed the images from being blurred from AA to their original depiction in the tile set

That is part of what I meant, yes. In fact, texture mapping allows either snapping to the nearest texel (that's short for "texture pixel") or interpolating linearly between the 4 texels surrounding an addressed sub-texel position.

 

However, pixel perfect mapping also requires a 1:1 relation between the texels and the pixels on the screen, so that 1 texel covers exactly 1 pixel. But that cannot be done if the target screen has another size (when measured in pixels) and you nevertheless want the playground to fit the screen. In such a case scaling is necessary, and scaling always cancels pixel perfect mapping. That may be the reason for the observed issue.

 


I will try separating the tiles with a border, however; I do not quite understand what you mean by using the next inner pixels to create a border for tiling sprites. Why wouldn't I just want to use a transparent border for both tiles and regular sprites? I am very new to game development and don't know a lot about 2D Graphics (or really anything, for that matter), and am only marginally familiar with sprite sheets/texture atlases.

The problem shows pixels from outside the wanted rectangle. If you set the outside (the said border) to transparent, then those transparent pixels will be shown. That will cause gaps in the tiled background. So if you cannot avoid extra pixels coming in, you want those extra pixels to attract as little attention as possible. And that is achieved when the extra pixels look like those already there.

 

So if you have selected the sprite image with a rectangular frame, and the pixel column left of the frame is a copy of the pixel column just inside the left border, the pixel column right of the frame is a copy of the pixel column just inside the right border, and similarly for the top row and bottom row, then you have repeated the pixels inside the frame to the outside. For completeness, you should also set the 4 corners.

 

Example coming … If the original image slice has pixels like

123
456
789

then after adding the repetition it looks like

11233
11233
44566
77899
77899

Note that the inner rectangle is still the original one.
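
A minimal sketch of that border duplication in C++, assuming 32-bit pixels stored row by row (the function is hypothetical, not part of any particular tool):

    #include <algorithm>   // std::min, std::max
    #include <cstdint>

    // Copies a w x h tile into a (w+2) x (h+2) destination buffer and fills the
    // 1 pixel border by repeating the outermost rows/columns (corners included).
    void padTile(const uint32_t* src, int w, int h, uint32_t* dst)
    {
        for (int y = -1; y <= h; ++y) {
            for (int x = -1; x <= w; ++x) {
                const int sx = std::min(std::max(x, 0), w - 1);   // clamp into the tile
                const int sy = std::min(std::max(y, 0), h - 1);
                dst[(y + 1) * (w + 2) + (x + 1)] = src[sy * w + sx];
            }
        }
    }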

 

Non-tiling sprites usually already have a transparent background around them. So the above method would just repeat the transparent pixels, which would be equivalent to drawing a transparent border around the selection frame.




#5217822 Data structure with bool field. How to set correctly?

Posted by haegarr on 20 March 2015 - 02:45 AM


[...] I just changed bool to int and now everything works fine.

To be as safe as possible you should

a) use types uint32_t and int32_t from stdint.h as suggested by MJP

b) do data padding manually; if possible disable automatic packing, e.g.

 #pragma pack(push, 1)
 struct PixelData { … };
 #pragma pack(pop)

c) implement a compile time check like (this one offered by c++11)

 static_assert(sizeof(PixelData)==36, "PixelData not correctly sized");


If you are not sure at some point, you can also apply a static assertion using the offsetof operation on particular struct fields.
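
For example (a sketch; the field name "color" and its offset are placeholders, since the actual layout of PixelData isn't shown here):

    #include <cstddef>   // offsetof

    static_assert(offsetof(PixelData, color) == 4, "unexpected padding before 'color'");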




#5217819 Advice regarding 2D art

Posted by haegarr on 20 March 2015 - 02:13 AM


[…] Is it possible to somehow animate such images (if only slightly)? I'm thinking of something similar to how Dragon's Crown handles NPCs (the guild master in the video below @06:38):

AFAIS the animation you talk about is just warping (with small amplitudes) of a foreground image that is composed onto a background image. How to do that depends on the graphics API you want to use. For example, if using OpenGL ES, you already have a simple mesh as "carrier" of an image texture for drawing. Using a finer mesh, e.g. with the vertices forming a grid, you are able to shift the vertices around so that the grid becomes irregular. With a few meshes with slightly shifted vertices, together with time based vertex position interpolation during rendering, such a warping effect can be done easily (the classic way of image warping is definitely more complex than this).
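
A tiny sketch of the time based vertex position interpolation mentioned above (plain C++, independent of the concrete graphics API; the Vec2 type is an assumption):

    #include <vector>

    // Blends the grid vertex positions between two slightly shifted key meshes.
    // t runs from 0 to 1, e.g. derived from a sine of the elapsed time.
    void blendGrid(const std::vector<Vec2>& keyA, const std::vector<Vec2>& keyB,
                   float t, std::vector<Vec2>& out)
    {
        out.resize(keyA.size());
        for (size_t i = 0; i < keyA.size(); ++i) {
            out[i].x = keyA[i].x + (keyB[i].x - keyA[i].x) * t;
            out[i].y = keyA[i].y + (keyB[i].y - keyA[i].y) * t;
        }
    }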




#5217295 Entity,Components: Issue in immediate refresh

Posted by haegarr on 18 March 2015 - 02:33 AM


[…] If I add a component, I would like that entity to be processed by a new system immediately

There is the following reasoning against the generalization of such an approach. It does not concretely answer your question, but it may make you rethink...

 

A game loop usually defines a sequence of updates on sub-systems in a fixed order. See for example this book excerpt over at Gamasutra. This is done to get control over things that happen concurrently in reality but cannot be simulated concurrently. If you don't do it that way, a single change may cause a cascade of subsequent changes that (a) may not settle in appropriate time, hence causing a delay and stutter in the frame rate, or (b) is canceled at some point, leaving the world in an incorrect state.

 

Obviously it is possible that a sub-system updated early in the game loop is allowed to make changes that have an impact only on sub-systems later in the game loop. For example, the input sub-system is traditionally the very first sub-system updated. The player control, which uses input in its update, can be run very soon after that. If the player control detects a "fire weapon" input situation and the corresponding action is not blocked for some reason, it can cause the instantiation of a BulletShot entity immediately, because the entity's updates, the shot's collision detection, and so on will be done later in the game loop. However, if the collision sub-system detects an entity entering a trigger volume, it must not cause the spawning of a new enemy immediately, because things like animation and physics are already done, and adding a new entity at that time may render the previous results partly meaningless.

 

So you need to clarify whether the issue explained in the 1st paragraph above may occur; if so, you have to judge whether it may harm the experience of your game, and if so, you have to decide which way to go. This thinking has to be done for each phase in the game loop where you want immediate changes.

 

Obviously the easiest way to deal with that problem is to defer all changes. Unfortunately I'm not familiar enough with Artemis to know what can be done. My approach would be to have both immediate and deferred handling. If the framework does not provide it (and is otherwise fine to use), then patching it may be an option!?
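
As a sketch of what "deferred handling" could look like in plain C++ if the framework itself does not offer it (names are hypothetical, not Artemis API):

    #include <functional>
    #include <utility>
    #include <vector>

    // Sub-systems only enqueue requests during their update; the queue is
    // flushed at a well defined point of the game loop (e.g. at the end of
    // the frame), so no sub-system ever sees a half-integrated entity.
    class DeferredSpawns {
    public:
        void request(std::function<void()> spawn) { m_pending.push_back(std::move(spawn)); }
        void flush() {
            for (auto& spawn : m_pending) spawn();
            m_pending.clear();
        }
    private:
        std::vector<std::function<void()>> m_pending;
    };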




#5215244 coordinates transformation

Posted by haegarr on 08 March 2015 - 02:47 AM

In case you really want to do more than calculating the reflection vector, and only translation and rotation but not scaling play a role:

 

All your vectors are given in a space described by the identity matrix, i.e. with

   o = [ 0 0 0 1 ]^T

   x = [ 1 0 0 0 ]^T

   y = [ 0 1 0 0 ]^T

   z = [ 0 0 1 0 ]^T

 

The other space is described by

   o', x', y', z'

 

What you want is a transform M that maps the original space onto the given one:

   o' = M * o

   x' = M * x

   y' = M * y

   z' = M * z

 

Because each of o, x, y, z has a single 1 and otherwise 0s as elements, and that 1 sits in a different row for each vector, each of the mappings above just picks a single column of M:

   M * x = M * [ 1 0 0 0 ]^T = 1st column of M

   M * y = M * [ 0 1 0 0 ]^T = 2nd column of M

   M * z = M * [ 0 0 1 0 ]^T = 3rd column of M

   M * o = M * [ 0 0 0 1 ]^T = 4th column of M

 

In other words, the transform M is the matrix

   M := [ x' y' z' o' ]

 

If you want to transform a vector v from the original into the mapped space, you have to apply

   v' = M * v

If you want to transform a vector v' from the mapped into the original space, you have to apply

   v = M^-1 * v'
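
In code, with column-major 4x4 matrices (the convention OpenGL uses), building M literally means writing x', y', z' and o' into its columns. A rough sketch with hypothetical Mat4/Vec4 types:

    // The columns of M are the mapped basis vectors and origin (homogeneous 4D).
    Mat4 makeBasis(const Vec4& xPrime, const Vec4& yPrime, const Vec4& zPrime, const Vec4& oPrime)
    {
        Mat4 M;
        M.col[0] = xPrime;   // x'
        M.col[1] = yPrime;   // y'
        M.col[2] = zPrime;   // z'
        M.col[3] = oPrime;   // o'
        return M;
    }

    // v' = M * v           maps from the original space into the given one,
    // v  = inverse(M) * v' maps back.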




#5215135 Ugly, can this be done easier?

Posted by haegarr on 07 March 2015 - 08:22 AM

To demonstrate, something like

static const D3DCOLOR selectedCol = D3DCOLOR_XRGB(255, 255, 0);
static const D3DCOLOR defCol = D3DCOLOR_XRGB(255, 255, 255);

switch(mCurrentMenu)
{
	case MENU_MAIN:
	{
		static const char* const items[]
			= {
				"Resume game",
				"New game",
				"Options",
				"Leaderboard",
				"Quit game",
				nullptr
			};
		unsigned int idxItem = mGameStarted ? 0 : 1;
		while (items[idxItem] != nullptr) {
			mD3d.mD3dFont.PrintLarge(items[idxItem], startMenuX, startMenuY + idxItem * offsetMenuItem, mCurrentMenuItem == idxItem ? selectedCol : defCol);
			idxItem++;
		}
	} break;
}

Because I don't know what type the first parameter of PrintLarge actually expects, I've used (undesired) char* here for simplicity.




#5214903 How to implemnt lighting in 2d sdl game?

Posted by haegarr on 06 March 2015 - 02:33 AM


I do not think this answers his question since there is no concept of a camera in SDL.

Assuming that you are using SDL_BlitSurface, then have a look at the SDL_SetAlpha function especially with its flag parameter set to SDL_SRCALPHA. This allows for alpha blending. Then use an image with an alpha channel, totally black but with full alpha in a central circle, falling off to 30% or so towards the borders. Set the surface to alpha blending, and blit it so that the destination rectangle matches the current view area. That is already what Orymus3 suggested, but expressed using SDL terms.
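
A rough SDL 1.2 style sketch of that idea; "darkness" is assumed to be a surface with an alpha channel that is opaque black where the scene should stay dark and transparent where the "light" is, and viewX/viewY/viewW/viewH stand for the current view rectangle:

    // Enable alpha blending for the overlay surface (per-pixel alpha is used
    // because the surface has an alpha channel).
    SDL_SetAlpha(darkness, SDL_SRCALPHA, 255);

    // ... blit the fully lit scene into 'screen' first ...

    // Then blit the darkness overlay over the current view area.
    SDL_Rect dst;
    dst.x = viewX;
    dst.y = viewY;
    dst.w = viewW;
    dst.h = viewH;
    SDL_BlitSurface(darkness, NULL, screen, &dst);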

 


I drew a black square on top of the screen with alpha blending so the game looked dark, but now I have no idea how to make things visible and add those "lights".

Notice that the above trick does not do lighting but darkening! The state of the destination surface beforehand is a fully lit scene, and then only those parts of the scene that should not be lit are darkened.





