haegarr

Member Since 10 Oct 2005

#5136746 Input handling in ECS system

Posted by haegarr on 06 March 2014 - 03:58 AM

Using low-level sub-systems from high-level sub-systems is fine, because that is the natural order when layering. Input is a low-level system, so I see no problem with it being used by e.g. your core game sub-system.

 

The solution I'm happy with so far is that Input, as a sub-system, just gathers and maintains low-level input and provides a plug-in mechanism that allows other sub-systems to investigate the queued input. A sub-system that is interested in input creates an InputHandler. An InputHandler is responsible for detecting situations in the queued input and usually translating them into commands (a form of high-level input) according to the input configuration (i.e. a mapping that defines which input situation should generate which command).

 

At least the higher-level sub-systems need to be processed in a defined order anyway. I do not propagate input by messaging; instead, each sub-system runs its InputHandler as needed inside its own update method.
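A minimal sketch of the plug-in mechanism described above; all names (InputEvent, InputHandler, InputSystem) and the callback-based handler are illustrative assumptions, not code from an actual engine:

```cpp
#include <functional>
#include <string>
#include <vector>

// Illustrative sketch only: names and structure are assumptions.
struct InputEvent { int key; bool pressed; };

// An InputHandler investigates the queued low-level input and translates it
// into high-level commands according to some mapping.
struct InputHandler {
    std::function<void(const InputEvent&)> onEvent;
};

class InputSystem {
public:
    // Low-level gathering: events are only queued here.
    void push(const InputEvent& e) { queue_.push_back(e); }

    // Plug-in point: a sub-system calls this from inside its own update().
    void runHandler(const InputHandler& h) const {
        for (const InputEvent& e : queue_) h.onEvent(e);
    }

    void clear() { queue_.clear(); }

private:
    std::vector<InputEvent> queue_;
};
```

Note that the sub-system pulls the input at a point of its own choosing, rather than being pushed messages, so the processing order stays under the caller's control.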




#5136269 Code lines number influenced by formatting style.

Posted by haegarr on 04 March 2014 - 03:44 AM

Just to mention: if one wants to use SLOC as a measure of programmer activity (and I do not recommend it), then comments should IMHO be included. Comments are an important part of the code, and commenting definitely costs time.




#5135832 Is sprite transform applied before or after cropping?

Posted by haegarr on 02 March 2014 - 05:20 AM

Drawing is done along the scanlines of the framebuffer. Which scanlines, and which span on each scanline, are touched depends on the sprite's geometry, i.e. the vertices' positions after transformation into screen space.

 

For each pixel in a span the corresponding texture co-ordinates are computed and used to sample the texture, taking the configured filter method (e.g. bi-linear) into account. The resulting color is then used for the pixel.

 

The mapping between pixel co-ordinates and texture co-ordinates can be understood as computing the span in the texture that corresponds to the span in the framebuffer. So cropping in the usual image-processing sense is not done at all, although the effect is the same.
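A toy sketch of that span mapping, assuming a 1D texture row, linear interpolation of the texture coordinate across the span, and nearest-neighbor filtering (a real rasterizer would typically filter bilinearly and use perspective-correct coordinates); all names are illustrative:

```cpp
#include <cstdint>
#include <vector>

// Toy span sampler: for each framebuffer pixel of a span, compute the
// corresponding texture coordinate and sample with nearest-neighbor
// filtering.
std::vector<std::uint8_t> drawSpan(const std::vector<std::uint8_t>& texRow,
                                   double u0, double u1, int pixelCount) {
    std::vector<std::uint8_t> span(pixelCount);
    for (int i = 0; i < pixelCount; ++i) {
        // Interpolate the texture coordinate across the span.
        double t = (pixelCount > 1) ? double(i) / (pixelCount - 1) : 0.0;
        double u = u0 + t * (u1 - u0);
        // Nearest-neighbor sample; real hardware may filter bilinearly.
        int texel = int(u * (texRow.size() - 1) + 0.5);
        span[i] = texRow[texel];
    }
    return span;
}
```

Shrinking the span (fewer pixels than texels) skips texels rather than cropping them, which is exactly the "same effect without cropping" described above.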

 

I hope I've understood your question correctly.




#5134993 Converting password to number

Posted by haegarr on 27 February 2014 - 04:13 AM

The first problem arises because you don't consider the position of the individual characters when summing them up. The plus operator is commutative, meaning that

   1 + 2 == 2 + 1

and hence summing up the ASCII values of "Paul" gives the same value as summing up the ASCII values of "luaP".

 

The second problem arises because a given sum can be produced by several different combinations of arguments. For example

   3 + 6 == 4 + 5

so summing up the ASCII values of "Lisa" gives the same value as summing up the ASCII values of "Bart" (both sum to 393).
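Both collisions are easy to demonstrate (assuming ASCII encoding); `asciiSum` is just an illustrative helper name:

```cpp
#include <numeric>
#include <string>

// Naive "hash": summing ASCII codes ignores character positions, so
// anagrams (and even unrelated strings) collide.
int asciiSum(const std::string& s) {
    return std::accumulate(s.begin(), s.end(), 0);
}
```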

 

Decent string hash functions take the position of each character into account, e.g. by multiplying the running sum by a constant. However, finding a constant such that the result has a vanishing probability of collision is not easy.

 

I suggest you do not roll your own solution but use an existing hash function. Several of them are available on the internet. For example, the Fowler/Noll/Vo version 1 alternative (FNV-1a for short) is available for 32, 64, and 128 bit hash values and demonstrates what I mean above. It is simple and can be implemented without hassle.
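A minimal sketch of the 32-bit FNV-1a variant; the constants are the published FNV offset basis and prime:

```cpp
#include <cstdint>
#include <string>

// 32-bit FNV-1a hash (Fowler/Noll/Vo, version 1 alternative).
std::uint32_t fnv1a32(const std::string& s) {
    std::uint32_t hash = 2166136261u;      // FNV 32-bit offset basis
    for (unsigned char c : s) {
        hash ^= c;                         // xor in the byte first ("-1a")...
        hash *= 16777619u;                 // ...then multiply by the FNV prime
    }
    return hash;
}
```

Because each byte is mixed in before the multiply, position matters: anagrams like "Paul" and "luaP" no longer collide.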

 

Another point is that hashing alone is not very secure. For a password hash one usually wants the reverse step, i.e. computing the unknown password from the known hash, to be infeasible with reasonable effort. Hence there are hash functions designed to be irreversible, so-called cryptographic hashes. Well-known candidates are MD5 and the SHA family, although some of them are known to have weaknesses.

 

For passwords you should consider using such a secure hash function.




#5134079 Implementing an Entity Component System in C++

Posted by haegarr on 24 February 2014 - 05:54 AM

The following is my personal opinion, of course...

 


I like my architecture to be clear, not to be a mix of several design patterns which are supposed to be "contestants" (e.g. Map would be a traditional class whereas Character would be an entity). It's quite confusing I think.

A big no! The toolbox of a programmer should hold more tools than just a hammer. You should always use the solution that is appropriate for the given problem.

 


With the OOP approach, the "skeleton" of the game is well defined by the classes: you read the Character class and you know everything about it. Whereas with the ECS approach, a Character is divided in several files (1 entity, x components and x systems) and you don't know where it all gets together. However, I agree the code is more modulable with an ECS.

Look at the traditional scene graph as in OpenSG. IMHO it is a bad solution (no disrespect meant) simply because they tried to fit everything into the hierarchy of the graph structure. Look at the mega super-class approach, where tons of functionality is shifted into the base class just in case one heir or another might need it. Look at the spaghetti-inheritance approach, which is problematic to maintain in the long run. (BTW, the latter two are the reason why CES is so popular these days.)

 

For a simple game the bad effects will not be very visible, and things can still be maintained. The problems arise more and more as the game grows. Of course, CES is not the ultimate solution to such problems either, but it is better suited than the other two approaches mentioned above. And, as said, don't let composition be your sole hammer either; inheritance still has its justification.




#5133881 Implementing an Entity Component System in C++

Posted by haegarr on 23 February 2014 - 10:10 AM

Don't make the mistake of fitting anything and everything into the CES schema. That is as wrong as trying to fit everything into a scene graph. Notice that another word for entity in the given meaning is "game object".

 

You haven't said precisely what a Map is. Probably it is a game level, i.e. a description of which game objects are to be loaded (and perhaps when they should be loaded), and how to parametrize them if instancing game objects is implemented that way. IMHO such a thing isn't a game object at all.

 

Another mistake would be to not allow interconnections between entities. Parenting / forward kinematics / 3rd-person camera attachment is just one thing that requires interconnection. E.g. a weapon is its own game object, regardless of whether it lies on the ground or is held by a character. Interconnections between components are also necessary. E.g. the Placement component stores the world position and orientation of an entity, and several other components depend on it.
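A tiny illustration of such an interconnection, with a weapon's Placement parented to a character's; the name Placement comes from the post, but orientation is omitted and the world position is simplified to an additive offset chain (real code would combine full transforms):

```cpp
// Hypothetical Placement component; heavily simplified for illustration.
struct Placement {
    float x = 0, y = 0;                 // position relative to the parent
    const Placement* parent = nullptr;  // interconnection to another entity

    // Resolves through the parent chain (forward kinematics).
    float worldX() const { return x + (parent ? parent->worldX() : 0.0f); }
    float worldY() const { return y + (parent ? parent->worldY() : 0.0f); }
};
```

Dropping the weapon is then just clearing the parent link; the weapon stays its own game object throughout.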

 

 

EDIT: Oh, I see that I'm just seconding phil_t's posts. It is nevertheless my own opinion.




#5133823 data compression

Posted by haegarr on 23 February 2014 - 05:40 AM

Information exists on some kind of storage (be it brain memory, a piece of paper, a hard disk, …). Data, as I understand it here, is a representation of information. Data compression is a way to choose another (comparable) representation that needs less space. If the compression is lossy, then some of the information isn't stored in the result compared to the original data. But the original data may still be stored elsewhere, and hence the information is not globally lost. However, an instance that has the compressed data as its only source of information about the given topic simply knows less than an instance holding the original data. From its point of view some information is lost, if it is aware that lossy compression has occurred. In the end, once all storages holding the original data are erased (the person is dead, the paper burned, the hard disk demagnetized, …), some information is definitely lost.

 

I'm not discussing esoteric things like time travel here. Information is something that can be retrieved, or else it has no meaning.




#5133817 data compression

Posted by haegarr on 23 February 2014 - 05:06 AM

haegarr

"information can be destroyed simply by leaving it out"

 

you can't prove that, was there information before you were born, is there going to be information after you die?

 

Hence my question whether there is a philosophical background to your question! My post states that the given answer refers to data compression (as found in computer science), which is done on a defined set of information given to a consumer. The consumer is then the instance that assesses the received information.




#5133802 data compression

Posted by haegarr on 23 February 2014 - 04:18 AM


if information cannot be destroyed nor created then what does really means data compression?

Is there a philosophical background to this question? With respect to data compression, information can be destroyed simply by leaving it out, so that the receiver cannot reconstruct it. Creating information is meaningless if we assume that the original information was sufficient and complete. Creating more information from a given fixed knowledge base means generating redundancy, which adds nothing to the amount of information.

 

Lossless compression means encoding the information in a way that needs less space than before. It merely reduces redundancy. Lossy compression accepts the loss of information that is deemed unimportant.
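As a toy illustration of redundancy reduction, here is run-length encoding, a trivially lossless scheme (names are illustrative): decoding recovers the input exactly, yet runs of repeated characters take less space.

```cpp
#include <string>
#include <utility>
#include <vector>

// Run-length encoding: removes one kind of redundancy (runs of repeated
// characters); decoding recovers the input exactly, so no information is
// lost.
std::vector<std::pair<char, int>> rleEncode(const std::string& s) {
    std::vector<std::pair<char, int>> runs;
    for (char c : s) {
        if (!runs.empty() && runs.back().first == c) ++runs.back().second;
        else runs.emplace_back(c, 1);
    }
    return runs;
}

std::string rleDecode(const std::vector<std::pair<char, int>>& runs) {
    std::string out;
    for (const auto& run : runs) out.append(run.second, run.first);
    return out;
}
```

A lossy scheme would instead drop some runs (or merge similar ones) and accept that the receiver can no longer reconstruct the original.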




#5132902 Using Texture Arrays

Posted by haegarr on 20 February 2014 - 02:18 AM

Only a few details are known about the low-level architecture of modern GPUs, so I may be wrong about the following.

 

Giving each texture unit (as accessible in OpenGL) its own cache is probably not done, because it would mean letting cache memory sit around unused whenever fewer than the total number of texture units are in use. I think no manufacturer would waste performance this way. Caching is usually done with the memory address as the key, which is itself independent of the unit.

 

Texels are rearranged when placed in GPU memory. This is because, in general, there is no preferred direction (i.e. neither rows nor columns are favored) when sampling. So the texels are stored in tiles (2D) or blocks (3D), and these define the units that get cached. I assume that a 2D texture array is still stored in tiles.
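A toy model of such tiling, assuming 4x4 texel tiles laid out row-major; the numbers are illustrative, since actual GPU layouts are proprietary:

```cpp
#include <cstddef>

// Toy tiled layout: texels stored in 4x4 tiles, tiles in row-major order.
// Illustrates why one cached tile covers a 2D neighborhood of texels.
constexpr std::size_t kTile = 4;

std::size_t tiledIndex(std::size_t x, std::size_t y, std::size_t widthInTexels) {
    const std::size_t tilesPerRow = widthInTexels / kTile;
    const std::size_t tileIndex = (y / kTile) * tilesPerRow + (x / kTile);
    return tileIndex * kTile * kTile + (y % kTile) * kTile + (x % kTile);
}
```

With this layout, texels that are close in both x and y land in the same tile, so sampling has no preferred direction.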

 

In summary, I think there is no penalty in using a texture array here. You could measure both ways, but that would give you a hint only for your own GPUs, of course.




#5131325 Auto update systems - yes or no

Posted by haegarr on 14 February 2014 - 11:27 AM

I don't like things happening silently. If an update is available, a non-intrusive notification is fine (i.e. not a modal alert, but a badge if possible). Also, if possible, the update mechanism should be integrated into the software, so that a separate download via a browser is not necessary. Just my 2 cents.




#5130799 what perspectives come into play

Posted by haegarr on 12 February 2014 - 07:22 AM

The correct word is projection. Here it means the way the 3-dimensional space (the scene, e.g. the world space in front of the camera) is mapped onto a 2-dimensional space (the screen, e.g. the camera's film).

 

If the projecting rays are all parallel, it is called a parallel projection. If the rays pass through a single point, it is called a perspective projection.

 

If, in a parallel projection, the rays hit the view plane at a 90° angle, it is specifically called an orthographic projection. If they hit at another angle instead, it is called an oblique projection.

 

Hence, "ortho" is (the short name for) a special kind of parallel projections and different from a perspective projection.

 

How to compute a perspective projection depends on the circumstances. The common property is that the lateral dimensions are scaled depending on the depth dimension, i.e.

   x' = f( x, z ) and y' = f( y, z )

There is no "correct" or "wrong" way per se.
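One common concrete choice of f (an assumption here, not the only possibility) is the pinhole model, with the camera at the origin looking down +z and the view plane at distance d:

```cpp
#include <array>

// Pinhole perspective projection: x' = d*x/z, y' = d*y/z.
// The focal distance d is an assumed parameter.
std::array<double, 2> project(double x, double y, double z, double d) {
    return { d * x / z, d * y / z };
}
```

The division by z is what makes it perspective: the same lateral offset maps closer to the image center the farther away the point is.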
 
Perspective projection and orthographic projection are both directly supported by graphics APIs, and both are used frequently in games. Pictorial orthographic projections (the axonometric projections, of which the (pseudo-)isometric and dimetric projections are often used in 2D games) are possible, too. Oblique projections are also possible but AFAIK seldom used for games; their original use case is technical drawing.
 
In general, projections that cannot be realized by a homogeneous matrix multiplication are not possible this way, such as (to my knowledge)
* inverse perspective projection, or
* projections that do not use a flat view plane.
However, even those can be used with ray tracing.
 
 
Some literature:
- Wikipedia's page about Graphical projection
- an article copy here on GDnet: Axonometric Projections - A Technical Overview
- the internet search engine of your choice



#5129810 Quat rotation confusion

Posted by haegarr on 08 February 2014 - 06:19 AM

I am very very happy to say that taking your advice worked and completely fixed my problems - now I can use a quaternion in general to represent an objects rotation or the camera's rotation - doesn't matter.. The forward, up, and side vectors are now all pointing correctly and objects rotate how I expect them to.

Hurray! :)

 

So.. I am still not 100 percent sure I know which terminology I should be using

Mathematics defines that in a matrix product (think of a vector as a matrix where one of the two dimensions has length 1) the number of columns of the left matrix must equal the number of rows of the right one. Hence you can write the product of a 4x4 matrix and a 4-element vector either as

| a  e  i  m |   | q |
| b  f  j  n | * | r |
| c  g  k  o |   | s |
| d  h  l  p |   | t |

or else as

                 | a  b  c  d |
| q  r  s  t | * | e  f  g  h |
                 | i  j  k  l |
                 | m  n  o  p |

Notice that in the 1st form the vector is on the right (typical for e.g. OpenGL), and it is written as a column. So we multiply 4 columns (in the matrix) by 4 rows (in the vector), which is mathematically okay. This is the use of column vectors, as you did.

 

Notice that in the 2nd form the vector is on the left (typical for e.g. D3D), and it is written as a row. So we multiply 4 columns (in the vector) by 4 rows (in the matrix), which is mathematically okay. This is the use of row vectors.

 
Notice finally that I've used the same letters in both forms, so each letter means the same value in both, although I have re-arranged them. This is because a mathematical correspondence exists by means of the transpose operation, which converts between column and row vectors as follows:
    ( M * v )^T == v^T * M^T
    ( v * M )^T == M^T * v^T
and finally
    ( M^T )^T == M
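The two conventions and the transpose correspondence can be checked numerically; this is a generic sketch, not tied to any particular math library:

```cpp
#include <array>
#include <cstddef>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<std::array<double, 4>, 4>; // m[row][col]

// Column-vector convention: w = M * v
Vec4 mulMV(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (std::size_t i = 0; i < 4; ++i)
        for (std::size_t j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// Row-vector convention: w = v * M
Vec4 mulVM(const Vec4& v, const Mat4& m) {
    Vec4 r{};
    for (std::size_t j = 0; j < 4; ++j)
        for (std::size_t i = 0; i < 4; ++i)
            r[j] += v[i] * m[i][j];
    return r;
}

Mat4 transpose(const Mat4& m) {
    Mat4 t{};
    for (std::size_t i = 0; i < 4; ++i)
        for (std::size_t j = 0; j < 4; ++j)
            t[j][i] = m[i][j];
    return t;
}
```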



#5129548 Quat rotation confusion

Posted by haegarr on 07 February 2014 - 05:19 AM


I am using row vector matrices ...

Are you sure? Because if you use row vectors, then you need to compute

screenSpace = positionVec * scaleMat * rotateMat * translateMat * viewMat * projMat  

instead. Perhaps you are confusing the meaning of "row / column vectors" with the memory layout "row / column major"? The former is based on the mathematical prescription of how to compute a matrix product, while the latter describes how the 2D arrangement of matrix elements is mapped into 1D computer memory.
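A small sketch of that distinction on the storage side (names are illustrative): the same logical element m(row, col) lives at a different 1D offset depending on the chosen layout, independently of whether the math uses row or column vectors.

```cpp
#include <array>
#include <cstddef>

// Memory layout is orthogonal to the row/column vector convention.
constexpr std::size_t N = 4;

double rowMajorAt(const std::array<double, N * N>& a, std::size_t r, std::size_t c) {
    return a[r * N + c];   // rows are contiguous
}

double colMajorAt(const std::array<double, N * N>& a, std::size_t r, std::size_t c) {
    return a[c * N + r];   // columns are contiguous
}
```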

 


… In the line there I set the first, second, and third rows of the matrix to the right, up, and target vectors respectively. This is how you are supposed to build the rotation matrix for a quaternion isnt it? Am I supposed to be setting the columns of the matrix rather than the rows?

The names side (or right), up, and forward (or target, in your case) vectors denote specific directions, in particular the principal positive axes of a local co-ordinate system. They need to be set as columns if using column vectors, and as rows if using row vectors. Not doing so means in fact storing the transposed and hence inverse rotation. Internally, the setRow and setColumn routines need to consider whether the row-major or column-major storage convention is chosen, and store the values accordingly. Getting that wrong again means a transposed and hence inverse rotation.

 


the strange thing is that it has worked this way for a long time now and only when I decided to switch things around to Quaternions has it given me trouble..

It is important to work through this stuff in a disciplined way, or else things may get messy elsewhere. Mistakes need not be visible immediately.

 
Well, that is the life of a programmer ... yesterday everything worked well, today some shit happens surprisingly ;) Even worse when we originally just wanted to make things better...



#5129528 Quat rotation confusion

Posted by haegarr on 07 February 2014 - 03:15 AM

In principle it should make no difference whether the quaternion is for a game entity or for the camera, as long as the camera is used like an object in the world. But exactly this is not clear from what the OP shows. My current problems in understanding are the following:

 

According to this line

screenSpace = projMat * camMat * translateMat * rotateMat * scaleMat * positionVec

I assume that you're using column-vector matrices. Yet the quaternion-to-matrix conversion sets the rows of the matrix to the side, up, and forward vectors:

retMat.setRow(NSVec4Df(getRightVec(), 0.0f), 0);
retMat.setRow(NSVec4Df(getUpVec(), 0.0f), 1);
retMat.setRow(NSVec4Df(getTargetVec(), 0.0f), 2);

This hints at the use of row-vector matrices. Are you sure you are using them correctly?

 

Moreover, in 

screenSpace = projMat * camMat * translateMat * rotateMat * scaleMat * positionVec

the camMat needs to be the viewMat, i.e. the inverse of the camMat. Have you considered this? The following line

camMat = rotateMat * translateMat * camOrigin

lets me assume that camMat should in fact be the viewMat (it is computed in reverse order), but it isn't clear whether you also invert rotateMat and translateMat accordingly, because what you actually need to apply is the rule

 

    ( T * R )^-1 = R^-1 * T^-1
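The rule can be verified numerically; here with 3x3 homogeneous 2D transforms for brevity (the 4x4 case works the same way, and all names are illustrative):

```cpp
#include <array>
#include <cmath>

// 3x3 homogeneous 2D transforms, column-vector convention.
// Demonstrates ( T * R )^-1 == R^-1 * T^-1.
using M3 = std::array<std::array<double, 3>, 3>;

M3 mul(const M3& a, const M3& b) {
    M3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

M3 rotation(double a) {
    return {{{std::cos(a), -std::sin(a), 0},
             {std::sin(a),  std::cos(a), 0},
             {0, 0, 1}}};
}

M3 translation(double x, double y) {
    return {{{1, 0, x}, {0, 1, y}, {0, 0, 1}}};
}
```

Multiplying T * R by R^-1 * T^-1 (i.e. rotation(-a) and translation(-x, -y)) yields the identity, confirming the reversed order of the inverted factors.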

 

The fact that the inverse of a rotation is identical to its transpose may explain why you set the row vectors in the matrices instead of the column vectors. However, the quaternion class is not just for the camera but for general use; as such it should not have hidden side effects. Moreover, nothing has been said yet about inverting the translation.

 

Even if you actually do all this mathematically right (I haven't checked whether the formulas match), naming things the way you do is confusing, because it violates a common naming convention. As you can see from my post, not following the convention brings up many questions just to make sure we're speaking about the same thing.





