I don't like things happening silently. If there is an update available, a non-intrusive notification is fine (i.e. not a modal alert, but a badge if possible). Also, if possible, the update mechanism should be integrated into the software, so that a separate download via a browser would not be necessary. Just my 2 Cents.
Posted by haegarr on 12 February 2014 - 07:22 AM
The correct word is projection. Here it means the way the 3-dimensional space (the scene, e.g. the world space in front of the camera) is mapped into the 2-dimensional space (the screen, e.g. the camera's film).
If the projecting rays are all in parallel, it is called a parallel projection. If the rays pass through a single point, it is called a perspective projection.
If, in a parallel projection, the rays hit the view plane at a 90° angle, it is specifically called an orthographic projection. If they instead hit at another angle, it is called an oblique projection.
Hence, "ortho" is (the short name for) a special kind of parallel projections and different from a perspective projection.
How to compute a perspective projection depends on the circumstances. The common thing is that the lateral dimensions are altered with the depth dimension, i.e.
x' = f( x, z ) and y' = f( y, z )
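The relation above can be sketched in code. This is a minimal pinhole-style example (the names and the parameter d, the assumed distance of the view plane, are made up for illustration, not taken from the post):

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// A minimal perspective projection sketch: the lateral coordinates are
// divided by the depth z, so points twice as far away appear half as
// large. d is the assumed distance of the view plane from the eye.
Vec2 projectPerspective(float x, float y, float z, float d) {
    return { x * d / z, y * d / z };
}
```

Here x' = x * d / z plays the role of f( x, z ), and analogously for y'.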
Posted by haegarr on 08 February 2014 - 06:19 AM
I am very, very happy to say that taking your advice worked and completely fixed my problems. Now I can use a quaternion in general to represent an object's rotation or the camera's rotation, it doesn't matter. The forward, up, and side vectors are now all pointing correctly and objects rotate how I expect them to.
So.. I am still not 100 percent sure I know which terminology I should be using
Mathematics defines that in a matrix product (look at a vector as a matrix where one of the two dimensions has length 1) the number of columns of the left matrix must be equal to the number of rows of the right. Hence you can write the product of a 4x4 matrix and a 4-element vector either so
| a e i m |   | q |
| b f j n | * | r |
| c g k o |   | s |
| d h l p |   | t |
or else so
              | a b c d |
| q r s t | * | e f g h |
              | i j k l |
              | m n o p |
Notice that in the 1st solution the vector is on the right (typical for e.g. OpenGL), and it is written as a column. So we multiply 4 columns (in the matrix) by 4 rows (in the vector), which is mathematically okay. This is the use of column vectors, as you did.
Notice that in the 2nd solution the vector is on the left (typical for e.g. D3D), and it is written as a row. So we multiply 4 columns (in the vector) by 4 rows (in the matrix), which is mathematically okay. This is the use of row vectors.
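The two conventions above can be sketched as follows (a self-contained example with made-up names; the matrix of one convention is the transpose of the other when both describe the same transform):

```cpp
#include <array>
#include <cassert>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>; // m[row][col]

// Column-vector convention: v' = M * v
Vec4 mulColumn(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// Row-vector convention: v' = v * M
Vec4 mulRow(const Vec4& v, const Mat4& m) {
    Vec4 r{};
    for (int j = 0; j < 4; ++j)
        for (int i = 0; i < 4; ++i)
            r[j] += v[i] * m[i][j];
    return r;
}
```

Multiplying a column vector by a matrix M gives the same result as multiplying the corresponding row vector by the transpose of M, which is exactly the relation between the two layouts shown above.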
Posted by haegarr on 07 February 2014 - 05:19 AM
I am using row vector matrices ...
Are you sure? Because if you use row vectors, then you need to compute
screenSpace = positionVec * scaleMat * rotateMat * translateMat * viewMat * projMat
instead. Perhaps you are confusing the meaning of "row / column vectors" and the memory layout "row / column major"? The former one is based on the mathematical prescription of how to compute a matrix product, while the latter one means how the 2D arrangement of elements in a matrix are mapped into the 1D computer memory.
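The distinction between the mathematical convention and the memory layout can be made concrete with a small sketch (illustrative values, not from the post):

```cpp
#include <cassert>

// The same 2x2 matrix
//   | 1 2 |
//   | 3 4 |
// mapped into 1D memory under the two storage conventions:
float rowMajor[4]    = { 1, 2, 3, 4 }; // rows lie contiguously
float columnMajor[4] = { 1, 3, 2, 4 }; // columns lie contiguously

// Element (row r, column c) of an NxN matrix (here N = 2) is found at
// rowMajor[r * N + c] or columnMajor[c * N + r] respectively.
float atRowMajor(int r, int c)    { return rowMajor[r * 2 + c]; }
float atColumnMajor(int r, int c) { return columnMajor[c * 2 + r]; }
```

Both layouts hold the same matrix; only the mapping of 2D indices to 1D memory differs, independent of whether you multiply with row or column vectors.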
… In the line there I set the first, second, and third rows of the matrix to the right, up, and target vectors respectively. This is how you are supposed to build the rotation matrix for a quaternion, isn't it? Am I supposed to be setting the columns of the matrix rather than the rows?
The names side (or right), up, and forward (or target, in your case) vectors are used for specific directions, in particular the principal positive axes of a local co-ordinate system. They are to be set as columns if using column vectors, and as rows if using row vectors. Not doing so means in fact storing the transpose, and hence the inverse, rotation. Internally, the setRow and setColumn routines need to consider whether the row major or the column major storage convention is chosen, and store the values accordingly. Not doing that correctly again means a transpose and hence an inverse rotation.
the strange thing is that it has worked this way for a long time now and only when I decided to switch things around to Quaternions has it given me trouble..
It is important to go disciplined through this stuff, or else things get messy perhaps elsewhere. Mistakes need not be visible immediately.
Posted by haegarr on 07 February 2014 - 03:15 AM
In principle it should make no difference whether the quaternion is for a game entity or for the camera, as long as the camera is used like an object in the world. But exactly this is not clear from what the OP shows. My current problems in understanding it are the following:
According to this line
screenSpace = projMat * camMat * translateMat * rotateMat * scaleMat * positionVec
I assume that you're using column vector matrices. The quaternion-to-matrix conversion sets the rows of the matrix to the side, up, and forward vectors.
retMat.setRow(NSVec4Df(getRightVec(), 0.0f), 0);
retMat.setRow(NSVec4Df(getUpVec(), 0.0f), 1);
retMat.setRow(NSVec4Df(getTargetVec(), 0.0f), 2);
This hints at the usage of row vector matrices. Are you sure that you use it correctly?
screenSpace = projMat * camMat * translateMat * rotateMat * scaleMat * positionVec
the camMat needs to be the viewMat == the inverse camMat. Have you considered this? The following line
camMat = rotateMat * translateMat * camOrigin
lets me assume that camMat should in fact be the viewMat (it is computed in reverse order), but it isn't clear whether you also invert rotateMat and translateMat accordingly, because what you actually need to apply is the rule
( T * R )^-1 = R^-1 * T^-1
The fact that the inverse rotation is identical to the transposed rotation may explain why you set the row vectors in the matrices instead of the column vectors. However, the quaternion class is not especially for the camera but for general use. As such you should not have hidden side effects in it. Moreover, there is still nothing said about the inversion of the translation.
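The inversion rule above can be sketched in 2D (a simplified illustration with made-up names, not the OP's code): instead of inverting the composed transform, first undo the translation, then undo the rotation.

```cpp
#include <cassert>
#include <cmath>

struct P { float x, y; };

P rotate(P p, float a) {
    return { p.x * std::cos(a) - p.y * std::sin(a),
             p.x * std::sin(a) + p.y * std::cos(a) };
}
P translate(P p, float tx, float ty) { return { p.x + tx, p.y + ty }; }

// camera-to-world is T(t) * R(a); hence world-to-camera (the view
// transform) is ( T * R )^-1 = R(-a) * T(-t), applied in reverse order
// with both parts inverted.
P toWorld(P local, float a, float tx, float ty) {
    return translate(rotate(local, a), tx, ty);
}
P toLocal(P world, float a, float tx, float ty) {
    return rotate(translate(world, -tx, -ty), -a);
}
```

A round trip through toWorld and toLocal returns the original point, which is exactly what the inversion rule guarantees.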
Even if you actually do all this mathematically right (I haven't checked whether the formulas match), naming it the way you're doing it is confusing, because you violate a common naming convention. As you can see from my post, not following the convention brings up many questions just to make sure we're speaking about the same thing.
Posted by haegarr on 06 February 2014 - 03:56 AM
This depends on the kind of projection. In raytracing the common kinds, perspective and orthogonal projection, are possible, but also less common ones like oblique projections, and even exotic things like panoramic projections. Furthermore, it is possible to generate the rays in view space and to transform them into world space, or else to generate them in world space directly; the former is usually easier. You further need to define which of the 6 principal axis directions you want to be the forward looking direction. Oh, and over-sampling and jitter play a role, too (forgot these in my first write). Without defining such things, the possibilities are too many for a detailed description here.
In general you have 2 things to consider: the eye point, if the projection uses one, and the sample point on the view plane. From these you can define the two parts required for a ray r, an "origin" r0 and a direction rd:
r( t ) := r0 + t * rd
The eye point (used by e.g. the perspective projection) defines a point through which each ray passes, and as such gives a fine origin r0 for the rays. In view space the eye point is (0,0,0), while in world space it is the positional part of the observer's world transform.
Parallel projections have rays that do not pass through a common point, but they make a defined angle with the view plane. The orthogonal projection, for example, makes a 90° angle with the view plane (in other words, the direction rd is equal to the normal (or perhaps its inverse) of the view plane at the position of the sample).
EDIT: ((Forgot to tell about sampling.)) The view is usually rectangular with a resolution of (Vx, Vy) many pixels. If the view plane is also rectangular, then it has a size of, say, (w, h). (If the view plane is not rectangular, then you need to map it to a rectangle.) Without over-sampling or jittering, each ray passes through the center of a rectangular portion of the view plane. There are (Vx, Vy) many such portions, each one being w/Vx by h/Vy in size. To find the centers, you need to add half of that size to the sample positions. Hence the sample points in local view plane co-ordinates are at
( -w/2 + w/2/Vx + i * w/Vx , -h/2 + h/2/Vy + j * h/Vy ) for 0 <= i < Vx and 0 <= j < Vy
assuming a symmetric viewport.
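The formula above translates directly into a loop over the viewport (a sketch with made-up names, no over-sampling or jitter):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Centers of the Vx-by-Vy pixel footprints on a symmetric w-by-h view
// plane: start at the lower-left corner (-w/2, -h/2), add half a pixel
// footprint to reach the first center, then step in w/Vx / h/Vy units.
std::vector<std::pair<float, float>> sampleCenters(int Vx, int Vy,
                                                   float w, float h) {
    std::vector<std::pair<float, float>> pts;
    for (int j = 0; j < Vy; ++j)
        for (int i = 0; i < Vx; ++i)
            pts.push_back({ -w / 2 + w / (2 * Vx) + i * (w / Vx),
                            -h / 2 + h / (2 * Vy) + j * (h / Vy) });
    return pts;
}
```

For a perspective projection each of these sample points, together with the eye point as origin, yields one ray direction rd.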
Posted by haegarr on 04 February 2014 - 01:53 PM
Events related to a window should be handled by a window handler or a similar instance, because that isn't input that will affect the behavior of entities within the game (like a player that is moving). OK, if you're closing the window, then the game is affected, but I think this would also be part of the window handler. This instance has to make sure that this kind of event is propagated to a kind of main system that is able to shut down all parts of an engine, or, more generally, a game.
It depends on where you draw the border. I have a thread running to gather input, to translate it from OS to engine events, and to push them into a queue. The thread that runs the game loop reads the queue when the InputServices is updated. This service instance provides the ability to plug in input listeners, so to say. Entity components like the PlayerController make use of this, and sub-systems can do so as well. E.g. the Graphics sub-system can install an input listener to get window size changes. And the loop itself could install a listener to detect quit conditions if wanted, or, even better, the GameServices instance should do so.
EDIT: Notice that I do not consume events in the game loop. Instead, events become deprecated at some time and will be removed then. This allows several units to detect relevant events. This is another use case for the timestamp here.
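The expiry-instead-of-consumption idea can be sketched like this (all names are made up for illustration; events age out rather than being removed by the first reader, so every listener gets a chance to see them):

```cpp
#include <cassert>
#include <deque>

struct InputEvent { double timestamp; int code; };

struct EventQueue {
    std::deque<InputEvent> events;

    void push(InputEvent e) { events.push_back(e); }

    // Events are not consumed by listeners; instead they expire once
    // their timestamp falls behind now - maxAge. Until then, any number
    // of listeners may inspect the same event.
    void expire(double now, double maxAge) {
        while (!events.empty() && events.front().timestamp < now - maxAge)
            events.pop_front();
    }
};
```

Calling expire() once per loop iteration keeps the queue bounded while still letting several units detect relevant events within the age window.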
Posted by haegarr on 04 February 2014 - 11:30 AM
Not all input is passed through the OS' windowing system anyway. Advanced USB input devices may need to be driven via HID or a similar API.
When gathering input it is generally important to store not (only) states but transitions where appropriate, and to store a timestamp of occurrence with them. This allows for temporal affiliation and combo detection. It is even more important not to miss input, so you perhaps need a separate thread for input gathering (depending on how the APIs provide the input).
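As a sketch of why timestamps matter (names and key codes are made up for illustration): a double-tap combo is simply two press transitions of the same key within a maximum interval.

```cpp
#include <cassert>
#include <vector>

// A transition records the change (press or release), not the state,
// together with the time of occurrence.
struct Transition { double timestamp; int key; bool pressed; };

bool isDoubleTap(const std::vector<Transition>& log, int key, double maxGap) {
    double lastPress = -1.0;
    for (const Transition& t : log) {
        if (t.key != key || !t.pressed) continue;
        // Two presses within maxGap seconds form the combo.
        if (lastPress >= 0.0 && t.timestamp - lastPress <= maxGap) return true;
        lastPress = t.timestamp;
    }
    return false;
}
```

With only states (pressed / not pressed) sampled once per frame, such temporal patterns could be missed or misread; transitions with timestamps make them unambiguous.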
Some events you need to get are actually bound to a window, like window resizing and closing. Such events must not be ignored when running in windowed mode, of course.
Posted by haegarr on 01 February 2014 - 05:08 AM
Notice that even your 2-level sort is still somewhat arbitrary in case a combination of type and priority does not prevent other instances of the same combination from being attached. E.g. if 2 exclusive effects with the same priority are attached, which one wins? What happens if resorting isn't robust and hence causes their order to reverse sometimes?
Letting an exclusive effect hide other effects of the same priority is relatively simple. Because higher priority comes first after sorting, and exclusive effects come before others (of the same priority) all you need to do is to check whether the first effect for a given priority is an exclusive one, and if so, jump straight to the next priority after processing the current effect.
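The skip logic just described can be sketched as follows (struct and function names are made up for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

struct Effect { int priority; bool exclusive; int id; };

// Returns the ids of the effects that actually get applied: effects are
// sorted by descending priority, exclusive ones first within a priority;
// an exclusive effect at the head of its priority hides the rest of it.
std::vector<int> appliedEffects(std::vector<Effect> effects) {
    std::sort(effects.begin(), effects.end(),
              [](const Effect& a, const Effect& b) {
                  if (a.priority != b.priority) return a.priority > b.priority;
                  return a.exclusive && !b.exclusive;
              });
    std::vector<int> applied;
    for (std::size_t i = 0; i < effects.size(); ) {
        int prio = effects[i].priority;
        applied.push_back(effects[i].id);
        if (effects[i].exclusive) {
            // Jump straight to the next lower priority.
            while (i < effects.size() && effects[i].priority == prio) ++i;
        } else {
            ++i;
        }
    }
    return applied;
}
```

Note that this still leaves the tie between two exclusive effects of equal priority arbitrary, as discussed above; a stable sort plus a deterministic tie-breaker (e.g. attachment order) would pin it down.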
Notice that you should not work with listening here. That is because you should assess the consequences of effects only after all effects are applied. Otherwise, a damage effect may cause the N/PC to die just because the pending healing effect was not processed yet.
Notice further that some effects work transitionally while others work statefully. E.g. a HealthBoost adds 100 HP when being attached and subtracts them when being detached, but its update() method does nothing in-between attachment and detachment. A DiseaseEffect instead continuously subtracts some HP. But that is hidden in the particular update() methods.
As crancran already noted, using some kind of set would do the trick.
I totally disagree with effects having a, well, effect on the input sub-system. First off, the input sub-system is low level compared to the effect system. As such it should not need to know about effects at all. Second, if you think about feedback to the player, wouldn't it be helpful if the player actually sees that the N/PC wants to move but cannot? In fact, effects (in this sense) should work on the character's animation / state machine level, but not on the input level.
Posted by haegarr on 26 January 2014 - 11:57 AM
Setting char *Text instead of string Text still crashes the console window.
For the case that your console application does not load the correct libraries: have you considered using one of the suggested standardized functions (sprintf, snprintf) instead?
However I still wanna be able to store the data in a string rather than a char.
The very first answer above already told the solution. In case that your favorite search engine has a defect … something like the following should do (but is written down untested):
stringstream myHexString;
myHexString << hex << number;
string str = myHexString.str();
Posted by haegarr on 26 January 2014 - 10:15 AM
But the thing is that all those methods still involve the console window.
Err, no! Each of the above mentioned functions
* snprintf() ((which is just a variant of sprintf() so that the number of characters written can be limited))
uses a string or char array as target; none of them involves the console window (I assume you mean the implicit use of stdout / stderr here).
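To make that concrete, here is a small sketch (the wrapper name toHex is made up): snprintf writes into a char buffer and never touches the console, and the result can be wrapped in a std::string afterwards.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Formats a number as hex into a local buffer via snprintf, then
// returns it as a std::string. No console I/O is involved.
std::string toHex(unsigned int number) {
    char buf[16];
    std::snprintf(buf, sizeof(buf), "%x", number);
    return std::string(buf);
}
```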
Posted by haegarr on 26 January 2014 - 06:01 AM
When one really restricts oneself to structured code, instructions like continue and break (where the latter is used in C/C++ switch statements, too) have to be avoided, and so does return (with the exception of being the very last instruction of a function body, but then only for the purpose of naming the result variable). This is because all of these instructions terminate the execution of a block somewhere in between.
We could live without them, that is true. But avoiding them will introduce more hassle in some situations than using them would: you may introduce more variables to hold resulting or temporary values; you may need to implement more, or more complex, conditional clauses. You may get less performance because of the additionally introduced (and, let's say it: unnecessary) jumps.
As game programmers we're after performance. Some things are easy to get: shortening function execution by returning early, serving the far more probable branch in the "if" instead of the "else" block so as not to break CPU pipelining, and similar things. I do not count such things as premature optimization. Compiler optimization can do some things, but it usually does not hurt to support it (I hope ;))
IMHO the switch statement falls into the same category. It isn't a necessity, but it is helpful in some situations and hence should be considered. My 2 Cents; of course I respect your current dislike for it.
BTW: I started programming in 1983, so I think I cannot be counted among the young generation anyhow. OTOH I'm not a nerd w.r.t. today's compilers and CPU utilization and hence only scratch the surface.
Posted by haegarr on 22 January 2014 - 09:05 AM
1.) Why is nearly all declared or computed twice?
2.) Why are variable names chosen so that they are not distinguishable from others?
3.) Why are variable names for semantically the same thing so different as for cubepos1 and cube2position?
4.) Why the redundant computation here (2nd line)?
cubepos1 = cubelocation1 + cubetranslation1
cubeoldpos = cubepos1 - cubetranslation1
5.) The dot-product gives a negative result if the angle between the 2 arguments is greater than 90°. The acos gives values in range [0,pi]. This means that your computation makes no difference between rotation to the left and rotation to the right, and it has discontinuities at +90° and -90°.
In general: clean up your code, drop duplicates, bring calculations into a meaningful order, choose meaningful names, and choose similarly looking naming schemes for semantically equal variables. This makes things easier to read for you and anyone you show the code to, and it makes debugging easier. Not doing so is part of the problem, really! Mistakes have much less chance to hide in clearly structured code.
Then read about the cross-product as a possibility to distinguish whether to rotate to the left or to the right: Use the sign of the cross-product to mirror the angle.
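The mirror-by-cross-sign idea can be sketched for the 2D case (function name made up for illustration): the dot product plus acos gives only the unsigned angle in [0, pi]; the sign of the cross product's z component tells left from right.

```cpp
#include <cassert>
#include <cmath>

// Signed angle from vector a = (ax, ay) to vector b = (bx, by):
// acos of the normalized dot product yields the magnitude, and the
// z component of the cross product supplies the rotation direction.
float signedAngle(float ax, float ay, float bx, float by) {
    float dot   = ax * bx + ay * by;
    float cross = ax * by - ay * bx; // z component of a x b
    float angle = std::acos(dot / (std::hypot(ax, ay) * std::hypot(bx, by)));
    return cross >= 0.0f ? angle : -angle; // mirror by the cross sign
}
```

This removes both problems noted above: left and right rotations now get different signs, and the result varies continuously instead of folding at ±90°.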
Posted by haegarr on 20 January 2014 - 12:42 PM
The problem with the OP is that even after writing an 800-page book as an answer there would still be some blanks. The more general the question, the more differing answers you'll get. There is no right way, especially if you don't specify some goals / restrictions.
For a (IMHO) good general rendering solution, look for the threads dealing with the Frostbite rendering architecture, rendering task/job queueing, and the threads linked therein.
For a (IMHO) good approach for input processing see the threads in the past half year where L. Spiro has written his answers. Short delay, no input misses, and input translation to abstract actions are the keywords in input processing.
For a (IMHO) more or less complete approach to animation, look e.g. at tutorials about Unity's Mecanim (especially on YouTube, because they give a good overview). Animation blending, animation layering, and multi-axis control are keywords here. Skinning is mostly a standard process.
The decision of which animation is played depends on the character controller (in case of a PC) or the AI (in case of an NPC), which is a beast of its own. Decision trees are one possibility. There is also the AIGameDev.com site especially for AI.
For a (IMHO) good approach of game world organization look at component based entity systems (CES) with sub-systems for the logic. IMHO you should avoid scene graph approaches.
For a good explanation of what a game loop may look like, try e.g. the book "Game Engine Architecture" by Jason Gregory; an excerpt of it can be found at Gamasutra.
Most if not all of the suggestions above are for sure overkill for a newcomer, I know. The parts suggested above together make a fine, modern engine. For now they may be worth studying to get an overview of why people tend to try solutions that look more complex than necessary at first glance. But start simple. And please break down your future questions so that threads do not cover such a broad spectrum. It helps in giving, and hence getting, better quality answers.
Posted by haegarr on 17 January 2014 - 03:35 PM
This line in the fragment shader
frag = ocol * texture( tex, otex );
needs to be
frag = ocol * texture2D( tex, otex );
The driver should have told you an error like "function texture not declared" or so. Please check the compile state of the shaders before linking.