haegarr

Member Since 10 Oct 2005

#5251484 orbit camera math

Posted by haegarr on 10 September 2015 - 02:33 AM


This works fine, but it has a max of 0.6 for x and 0.4 for y; I would like the max to be 2pi for x and pi for y.

Please look into the other thread.

 


My question is generic, but how can I convert a value, for example from -10..15, to 0..360?

Assuming you want to map this linearly, you need to do

1. subtract the lower limit, here -10, so that the new lower limit is at 0

     ( -10 .. 15 ) - (-10) => 0 .. 25 

2. normalize the range by dividing by the difference of the upper and lower limits, here 15-(-10)=25, so that

     ( 0 .. 25 ) / 25 => 0 .. 1

3. multiply by the desired range, here 360-0=360, so that

     ( 0 .. 1 ) * 360 => 0 .. 360

4. add the desired lower limit, here 0, so that

     ( 0 .. 360 ) + 0 => 0 .. 360
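
Just for illustration (the helper name remapLinear is my own), the four steps collapse into one small function:

    #include <cassert>

    // Linearly remaps 'value' from [inLow, inHigh] to [outLow, outHigh].
    // Steps 1+2: shift to 0 and normalize; steps 3+4: scale and shift to the new lower limit.
    float remapLinear(float value, float inLow, float inHigh, float outLow, float outHigh)
    {
        assert(inHigh != inLow);                       // avoid a division by zero
        float t = (value - inLow) / (inHigh - inLow);  // now in [0,1]
        return t * (outHigh - outLow) + outLow;        // now in [outLow,outHigh]
    }

    // e.g. remapLinear(-10.0f, -10.0f, 15.0f, 0.0f, 360.0f) == 0.0f
    //      remapLinear( 15.0f, -10.0f, 15.0f, 0.0f, 360.0f) == 360.0f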




#5251478 orbit camera

Posted by haegarr on 10 September 2015 - 01:31 AM


1) The pitch has a limited range: when I move the mouse up and down, the mesh is rotated by only 10-20 degrees around the x axis.

Well, I made one mistake in post #10. The value s must be half of what I wrote, hence

    float s = glm::min<float>(m_width, m_height) * 0.5f;

Sorry for that.

 


2) Moving the mouse from left to right I get a pitch variation and I don't understand why; it should change only when moving from top to bottom or bottom to top. Phi and theta are related to the x and y of the mouse; I don't understand where I am wrong.

This kind of solution does not work as you expect. Because it uses the atan2(y,x) function, phi is an angle measured from the positive horizontal axis x in CCW direction (or CW, depending on your co-ordinate system) around the screen center. If you were able to move the mouse in a perfect circle around the center, you would get a smoothly varying phi and a constant theta. On the other hand, if you move the mouse in a straight line from the center to the outside, you get a constant phi and a smoothly varying theta. Well, at least you should get that due to the chosen model of camera rotation.

 


the phi at the corners is always 55 and the theta:
...

I asked for (xp,yp) and not for phi for a specific reason: if the co-ordinates are already wrong, then calculations based on those co-ordinates yield nonsense in a way that probably cannot be traced back.

 

On an 800 x 600 screen / window, and considering the correction I mentioned above, min(m_width, m_height) is 600 and hence s = 300. At the left edge mouse x would be 0, and hence

    xp = (0 - 800 / 2) / 300 = -400 / 300 = -1.333

and at the right edge

    xp = (799 - 800 / 2) / 300 = +1.33

Similarly at the top and bottom edges
    yp = (0 - 600 / 2) / 300 = -1
    yp = (599 - 600 / 2) / 300 = +0.997
 

Can you confirm this? Because here ...


float xp = ((m_deltax - m_width / 2) / s);
float yp = ((m_deltay - m_height / 2) / s);

you seem to deal with delta values of mouse motion. That would not be correct. You need to use absolute mouse position values for this kind of solution.
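
A minimal sketch of the corrected computation, assuming mouseX and mouseY hold the absolute mouse position in (inner) window co-ordinates; the names are placeholders, not your actual members:

    float s  = glm::min<float>(m_width, m_height) * 0.5f;  // corrected s, see above
    float xp = (mouseX - m_width  / 2.0f) / s;              // absolute position, not a delta
    float yp = (mouseY - m_height / 2.0f) / s;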

 


But what I do not understand is: isn't the hemisphere a radius-1 hemisphere?
And why do the values at the corners go from 0.68 (xp) and 0.43 (yp) [...]

Yep, the normalization by s should make it a unit hemisphere. But because of the mistake, a hemisphere with radius 0.5 was computed so far.

 

BTW: a yp of 0.43 is wrong even when considering the wrong s. If you run that stuff in a window with borders, you need to use the inner (client) window size instead of the screen size. Do you do so?

 


[...] and not from 0 to 1?

The value range should be [-1,+1) in vertical and [-a,+a) in horizontal direction, where a is the aspect ratio.



#5250668 Does anyone know which OpenGL state did I screw up?

Posted by haegarr on 05 September 2015 - 02:07 AM

As a rule of thumb: IMO a rendering sub-system should not (with one exception, see below) rely on state. Each rendering job should send a full set-up description, including any related parameters that it can change at all (i.e. in the case of models: VB/IB set-up, material related things, blending, primitive mode, shading, and so on). Then the lowest layer just above OpenGL can be used to compare the requested set-up against an internal image of OpenGL's set-up; differences then result in OpenGL calls and, of course, changes to the internal image. This method is cheap enough, avoids confusion like that in the OP, and is useful for decoupling purposes.
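
A minimal sketch of such a lowest layer, reduced to a single piece of state (the depth test) for brevity; the class and member names are of course just placeholders:

    #include <GL/gl.h>   // or whatever GL loader header you use

    // Compares the requested set-up against an internal image of OpenGL's set-up
    // and issues GL calls only for the differences.
    class GLStateCache {
    public:
        void setDepthTest(bool enabled) {
            if (enabled == m_depthTest) return;   // matches the cached image: no GL call
            if (enabled) glEnable(GL_DEPTH_TEST);
            else         glDisable(GL_DEPTH_TEST);
            m_depthTest = enabled;                // keep the internal image in sync
        }
    private:
        bool m_depthTest = false;                 // must mirror GL's initial state
    };

Each rendering job would then pass its full set-up through such an object instead of calling OpenGL directly.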




#5250260 orbit camera

Posted by haegarr on 02 September 2015 - 08:16 AM

I had some problems deciphering your post (no offense intended), so bear with me if I misunderstood what you meant ...

 

1) The quaternion needs only one angle to create an angular displacement, is that correct? Why now two angles? For the two quaternions that must be interpolated?

A quaternion, in fact a unit quaternion, is a kind of representation for rotations. As such it encodes an axis of rotation and an angle of rotation (and it has the constraint that its 2-norm is 1, else it would not be a unit quaternion and shearing would appear).

 

Interpolation means to calculate an in-between, having 2 supporting points (or key values) at the limits. Whether these 2 supporting points are spatially or temporally or otherwise related plays no role for the interpolation. Which "2 quaternions" do you want to interpolate? The control schemes described above do not by themselves require the use of quaternions. If you speak of a smooth transition from the current orientation to the next, then one support point is the most recently used quaternion and the other is the newly determined one (from mouse position / movement).

 

2) I see the squad and there are two quaternions and a variable t for time? Then must I get the time for each step? And how can I convert t to [0,1]?

The 2 quaternions are the aforementioned support points, and the free variable (you used t, I will use k below) denotes where the in-between is located between the support points. You can compute an in-between only when you provide a value for k, yes. (But, as said, it need not be a time value.) How to determine a suitable k depends on what you want to achieve. For example, if you want N interpolation steps that are equally distributed within the allowed range [0,1], then you would use

    kn := n / N   with   n = 0, 1, 2, …, N

where kn is the value of k at step n. Notice that n increments by 1 from 0 up to N, inclusive; this would be implemented as a counting loop, of course. So you get

    k0 = 0 / N = 0

    kN = N / N = 1

as is required for the interpolation factor by definition.

 

If, on the other hand, you want the interpolation to run over a duration T, starting at a moment in time t0 (measured by a continuously running clock), and now being at a measured moment t, then

    k( t ) := ( t - t0 ) / T   with   t0 <= t <= t0+T

so that, as required by the interpolation factor definition,

    k( t0 ) = ( t0 - t0 ) / T = 0

    k( t0 + T ) = ( t0 + T - t0 ) / T = 1

 

As you can see in both examples above, the allowed range [0,1] is achieved by normalizing (division by N or T) and, in the case of the timed interpolation, by first shifting the real interval (subtraction of t0) so that it originates at 0; the latter part was not necessary in the first example because it already originates at 0.
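
Both variants as a small sketch; the function names are mine, and glm::slerp stands in for whatever quaternion interpolation you actually use:

    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>

    // N equally distributed in-betweens: k_n = n / N with n = 0..N
    void steppedInterpolation(const glm::quat& qFrom, const glm::quat& qTo, int N)
    {
        for (int n = 0; n <= N; ++n) {
            float k = float(n) / float(N);            // 0, 1/N, 2/N, ..., 1
            glm::quat q = glm::slerp(qFrom, qTo, k);  // in-between at step n
            // ... apply q ...
        }
    }

    // Timed variant: the interpolation runs over a duration T, started at moment t0.
    glm::quat timedInterpolation(const glm::quat& qFrom, const glm::quat& qTo,
                                 float t0, float T, float t)
    {
        float k = glm::clamp((t - t0) / T, 0.0f, 1.0f);  // shift and normalize into [0,1]
        return glm::slerp(qFrom, qTo, k);
    }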

 

And how can I transform the position of the mouse to the hypersphere? Must I project? How? [...]

Well, a hemisphere (half of a full sphere) is luckily not a hypersphere (a sphere in more than 3 dimensions)!

 

Let's say the mouse position is the tuple (mx,my) and the screen size is given by (w,h) in the same co-ordinate system as (mx,my). Then the relative mouse position is

   s := min( w, h ) * 0.5    << EDIT: must be halved to yield a proper [-1,+1] normalization, hence the 0.5

   x' := ( mx - w / 2 ) / s

   y' := ( my - h / 2 ) / s

 
The position is within a circle as described in a previous post only if
   x'^2 + y'^2 <= 1
otherwise the mouse is out of the range of our gizmo! If inside, then the tuple (x',y') denotes a normalized position within the projected circle.
 
A point (x,y,z) on a hemisphere is described in spherical co-ordinates by
   x := r * sin( theta ) * cos( phi )

   y := r * sin( theta ) * sin( phi )

   z := r * cos( theta )

Due to normalization we can ignore the radius because it is 1.

 

If we divide y by x we obtain

   y / x = sin( phi ) / cos( phi ) = tan( phi )

and hence we can compute phi' for our relative mouse position (x',y') using the famous atan2 function as

   phi' = atan2( y', x' )

 

For theta or z, resp., we have 2 ways. One of them is derived from the fact that each point on the unit sphere is 1 length unit away from its center. That means for us

   x'^2 + y'^2 + z'^2 = 1

so that for our z', considering that we use the "upper" hemisphere, we have

   z' = +sqrt( 1 - x'^2 - y'^2 )

This is valid due to our above formulated condition that the mouse position is within the circle.

 

Hence we can calculate

   theta' = acos( z' )

 

Now we have 2 angles, phi' and theta'. What is left over is how to map that onto yaw and pitch, a question you need to answer.
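
Putting the whole derivation together as a sketch (the function name is a placeholder); it returns false when the mouse is outside the projected circle:

    #include <algorithm>
    #include <cmath>

    // Maps an absolute mouse position (mx,my) on a w x h screen / inner window
    // to the hemisphere angles phi' and theta'.
    bool mouseToHemisphere(float mx, float my, float w, float h,
                           float& phi, float& theta)
    {
        float s  = std::min(w, h) * 0.5f;   // normalization, see the EDIT above
        float x  = (mx - w * 0.5f) / s;
        float y  = (my - h * 0.5f) / s;
        float rr = x * x + y * y;
        if (rr > 1.0f) return false;        // outside the range of our gizmo
        float z  = std::sqrt(1.0f - rr);    // "upper" hemisphere: positive root
        phi   = std::atan2(y, x);
        theta = std::acos(z);
        return true;
    }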




#5249867 entity component system object creation

Posted by haegarr on 31 August 2015 - 06:42 AM

You have a factory method in your runtime that delivers a new instance of the requested kind. The factory method knows a recipe for every kind that can be requested. The recipe may be

 

a) a hardcoded routine; this has the drawback of not being as flexible as a data driven approach, and hence causes maintenance problems in the long run; it is, however, quickly implemented;

 

b) a prototype, i.e. a completely assembled instance, that is deeply copied and perhaps partly re-parametrized by the factory; this variant is what Juliean suggests if I understood it correctly;

 

c) a prescription of how to instantiate and assemble a new entity; the prescription is processed (e.g. interpreted) when needed;

 

You can use combinations of them. For example, a) or c) can be used to generate the prototype for b). Moreover, both the prototype and the prescription can be read from mass storage.

 

d) In the former case of the prototype we speak of de-serialization. It requires that the instance is built and serialized once, and can then be de-serialized as often as needed (once per application start in our use case). As such the representation on mass storage is close to the representation in memory, so that loading it is relatively fast and re-interpretation of what is read is reduced to a minimum.

 

e) In the case of a prescription, loading is a breeze, because you load just data which is, however, later interpreted by the factory nonetheless. You can use a binary format or a text format for the file representation. The text format, together with a "human readable" format specification, may have the advantage that you can use any text editor to define the prescription at will. XML and JSON (and similar formats) are often used to do so. However, XML is somewhat bloated, though it provides additional stuff like attributes.
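
As a sketch of variant b) behind a factory; names like EntityFactory and Entity::clone are purely illustrative:

    #include <memory>
    #include <string>
    #include <unordered_map>

    struct Entity {
        virtual ~Entity() = default;
        virtual std::unique_ptr<Entity> clone() const = 0;  // deep copy of the prototype
    };

    class EntityFactory {
    public:
        // register a completely assembled prototype under a kind name
        void registerPrototype(const std::string& kind, std::unique_ptr<Entity> proto) {
            m_prototypes[kind] = std::move(proto);
        }
        // variant b): deliver a deep copy of the prototype of the requested kind
        std::unique_ptr<Entity> create(const std::string& kind) const {
            auto it = m_prototypes.find(kind);
            return it != m_prototypes.end() ? it->second->clone() : nullptr;
        }
    private:
        std::unordered_map<std::string, std::unique_ptr<Entity>> m_prototypes;
    };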




#5249527 Checkboard a sphere without texture?

Posted by haegarr on 29 August 2015 - 04:05 AM

It is not exactly possible (just as it is not when using a texture), but of course you can parametrize the surface of the sphere, compute the parameters of the point where the ray intersects, transform the parametrization into one suitable for coloring, and finally use the associated color.

 

Example:

 

1. Compute the intersection point in object local space using cartesian co-ordinates as usual.

 

2. Transform the cartesian co-ordinates into spherical co-ordinates.

 

3. Drop the radial co-ordinate and map the remaining two by modulo calculations.

 

4. Pick a color based on the 2 modulo values.
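
A sketch of steps 2 to 4; the intersection point p is assumed to be given in the sphere's local space, and checkerSize is an arbitrary angular cell size:

    #include <cmath>
    #include <glm/glm.hpp>

    // Returns true for "white" cells and false for "black" cells of the checkerboard.
    bool checkerColor(const glm::vec3& p, float checkerSize /* radians per cell */)
    {
        // 2. cartesian -> spherical; the radial co-ordinate is irrelevant for the pattern
        float theta = std::acos(glm::clamp(p.z / glm::length(p), -1.0f, 1.0f)); // polar angle
        float phi   = std::atan2(p.y, p.x);                                     // azimuth
        // 3.+4. map both angles onto cells and combine their parities into 2 colors
        int u = int(std::floor(phi   / checkerSize));
        int v = int(std::floor(theta / checkerSize));
        return ((u + v) & 1) == 0;
    }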




#5249501 2D OpenGL lighting using shaders

Posted by haegarr on 29 August 2015 - 02:50 AM

The code snippets you provided so far are not sufficient for an analysis. So I'll describe what to do, now with more details (the following is the most basic way; it can be fleshed out, of course):

 

1. You have a mesh, obviously shaped as a quad. Each vertex has the mandatory position and a texture co-ordinate.

 

2. You have a texture that stores the color as it looks when being fully lit. This is because you can darken a color easily, but brightening it would introduce inaccuracies, and is not possible at all if it is black. Set this texture for sampling in the fragment shader.

 

3. You have a scene-constant ambient light intensity given as an RGB value. Set this value as a uniform for the fragment shader.

 

4. You have a spot light with an intensity given as RGB, a position given in screen co-ordinates, and a radial extent given in screen co-ordinates. Set these values as uniforms for the fragment shader.

 

5. In the fragment shader, use the UV co-ordinates to sample the color texture.

 

6. Calculate the distance from the current fragment to the spot light position.

 

7. Attenuate the intensity triple of the spot light according to the distance calculated in 6.

 

8. Add the ambient intensity triple to the result of 7.

 

9. Clamp the result of 8. to (1,1,1).

 

10. Multiply the result of 9. by the sampled texture color.

 

11. Write the result of 10., extended by the homogeneous 1, as the fragment color. Do not use the built-in blending engine here.
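
Not your shaders, of course, but a sketch of steps 5 to 11 as a GLSL fragment shader, here embedded in a C++ raw string; the uniform names and the simple linear falloff in step 7 are my own choices:

    // Fragment shader corresponding to steps 5 to 11 above.
    const char* kLitSpriteFS = R"GLSL(
        #version 330 core
        in  vec2 vUV;                  // texture co-ordinate from the vertex shader
        out vec4 fragColor;
        uniform sampler2D uTexture;    // fully lit color texture (step 2)
        uniform vec3  uAmbient;        // ambient intensity (step 3)
        uniform vec3  uLightColor;     // spot light intensity (step 4)
        uniform vec2  uLightPos;       // spot light position in screen co-ordinates
        uniform float uLightRadius;    // radial extent in screen co-ordinates
        void main() {
            vec4  texel = texture(uTexture, vUV);                         // step 5
            float dist  = distance(gl_FragCoord.xy, uLightPos);           // step 6
            float atten = clamp(1.0 - dist / uLightRadius, 0.0, 1.0);     // step 7
            vec3  light = min(uLightColor * atten + uAmbient, vec3(1.0)); // steps 8 + 9
            fragColor   = vec4(light * texel.rgb, 1.0);                   // steps 10 + 11
        }
    )GLSL";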




#5248983 Finding Up and Right vectors from Look for view matrix?

Posted by haegarr on 26 August 2015 - 07:42 AM


What if I want to be able to set it upwards?

Then you have an infinite number of possibilities without knowing which one to use (from the targeted direction alone). So you either choose a heading heuristically or keep historical information (about the previous heading) on how the upward direction was reached. By the way, the same is true for looking straight downwards.
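
A minimal sketch of the heuristic variant; the fallback axis and the threshold are arbitrary choices of mine:

    #include <cmath>
    #include <glm/glm.hpp>

    // Chooses an up vector for a look-at from the look direction alone.
    glm::vec3 chooseUp(const glm::vec3& lookDir)
    {
        const glm::vec3 worldUp(0.0f, 1.0f, 0.0f);
        if (std::abs(glm::dot(glm::normalize(lookDir), worldUp)) > 0.999f)
            return glm::vec3(0.0f, 0.0f, 1.0f);   // looking (almost) straight up or down
        return worldUp;
    }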




#5248942 My scene management failed

Posted by haegarr on 26 August 2015 - 04:01 AM

((EDIT: Damned editor is eating large portions of my post when I embed citations. So here we go without…))
 
You can ask ;) but any and all answers you get will not relieve you of making your own decisions and learning from what goes well and what goes wrong. There is no single right way, because a game is inherently complex. FWIW, I usually follow these guidelines:
1. Solve problems top-down; consider how the problem is embedded in the entirety; divide a problem recursively until you get digestible pieces.
2. A unit of software should have 1 concern or should be responsible for 1 thing (however blurry that concern may be ;) ); if it has more than 1, then bullet point 1 wasn't followed far enough.
3. Bullet points 1 and 2 lead naturally to the use of interacting sub-systems; build this as a hierarchy; higher level systems use lower level systems and work together with same level systems; lower level systems should not use higher level systems; a system should not directly use another that is more than 1 level below.
4. Use the data-driven approach where appropriate.
5. When OOP-ing, do use inheritance where appropriate but prefer composition over inheritance.
 
Examples:
 
The game loop rules the coarse order in which things happen. You've seen in the "Game Engine Architecture" book (that of Jason Gregory, right?) that a specific order of animation steps on the level of the game loop helps to solve problems of dependencies. This is also true for other sub-systems. Blindly removing a game object without knowing whether another sub-system is still working with it would be disastrous. Also not good would be to instantiate a new game object at a point in time where AI, animation, or collision detection have already been done. Hence having a defined point in the game loop, e.g. just after input gathering, where all game object addition and removal w.r.t. the scene happens, means that game objects neither pop up nor disappear at the wrong moment. (So we have considered the environment in which spawning and removal happen, and have seen that it is beneficial to do it synchronously.) Well, such deferred addition and removal requires that (a) we use a kind of job object and (b) we have a sub-system where the jobs can be sent to. Here the scene management comes into play. Because the concern of the scene management is to manage all game objects that live in the scene, it is the sub-system that can process said jobs. Furthermore, this shows that the scene management has its own point in time for updating within the game loop.
 
Now that we have introduced the scene management, should we put the scene graph into it? The scene graph has the purpose of propagating properties. This is a different concern than the existence of game objects, so no, it should not be part of the scene management as defined above. A scene graph is another structure used for another purpose. Similarly, say, an octree used for collision detection is its own structure, as is a render job queue, and so on. What structure does a scene manager then need? It depends on the API. Until now we have said that game objects should be added and removed. We can use a handle concept and IDs for naming game objects; then 2 arrays would be sufficient to hold the indirections and the game objects themselves. This would also be sufficient to serve game object retrieval requests.
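
As a sketch of such a scene manager API with deferred addition / removal jobs; all names are illustrative, and I use a plain hash map instead of the handle / indirection arrays for brevity:

    #include <cstdint>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    using GameObjectId = std::uint32_t;
    struct GameObject { /* ... */ };

    class SceneManager {
    public:
        // may be called from anywhere during the frame: only records a job
        void requestAdd(GameObjectId id, GameObject obj) { m_toAdd.emplace_back(id, std::move(obj)); }
        void requestRemove(GameObjectId id)              { m_toRemove.push_back(id); }

        // called once per loop iteration, e.g. just after input gathering
        void update() {
            for (auto& job : m_toAdd)    m_objects[job.first] = std::move(job.second);
            for (auto  id  : m_toRemove) m_objects.erase(id);
            m_toAdd.clear();
            m_toRemove.clear();
        }

        GameObject* find(GameObjectId id) {
            auto it = m_objects.find(id);
            return it != m_objects.end() ? &it->second : nullptr;
        }
    private:
        std::unordered_map<GameObjectId, GameObject> m_objects;    // the objects in the scene
        std::vector<std::pair<GameObjectId, GameObject>> m_toAdd;  // pending addition jobs
        std::vector<GameObjectId> m_toRemove;                      // pending removal jobs
    };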
 
Resource management is another good example, because it occurs so often. What is the concern of resource management? The supply of resources for the game. The fact is that resources live persistently on mass storage, which means we are working with 2 copies of resources (the other one in RAM). Well, 2 copies are enough; we don't want more. The resource management should hide all these details. So, although we have used a somewhat fuzzy description of the concern of resource management, we identified 2 tasks belonging to it: resource caching and resource loading. Since these are 2 lower level concerns of resource management, we should implement resource management as a front-end that defines the API for clients, and a back-end made of 2 delegate objects, one for caching and one for loading. The front-end manager then uses the back-end objects to fulfill the API. This can be done further down, too; e.g. the loader may use a file format wrapper to do the actual loading.
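
And a sketch of the resource management front-end / back-end split; again all names are illustrative, and the loader body is just a stand-in:

    #include <memory>
    #include <string>
    #include <unordered_map>

    struct Resource { /* ... */ };

    class ResourceLoader {                        // back-end: access to mass storage
    public:
        std::shared_ptr<Resource> load(const std::string& name) {
            return std::make_shared<Resource>();  // stand-in: actually read from mass storage
        }
    };

    class ResourceCache {                         // back-end: the in-RAM copies
    public:
        std::shared_ptr<Resource> find(const std::string& name) const {
            auto it = m_cache.find(name);
            return it != m_cache.end() ? it->second : nullptr;
        }
        void insert(const std::string& name, std::shared_ptr<Resource> res) {
            m_cache[name] = std::move(res);
        }
    private:
        std::unordered_map<std::string, std::shared_ptr<Resource>> m_cache;
    };

    class ResourceManager {                       // front-end: the API clients see
    public:
        std::shared_ptr<Resource> acquire(const std::string& name) {
            if (auto res = m_cache.find(name)) return res;  // already in RAM
            auto res = m_loader.load(name);                 // otherwise load it once
            m_cache.insert(name, res);
            return res;
        }
    private:
        ResourceCache  m_cache;
        ResourceLoader m_loader;
    };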
 
 
Well, all of the above is somewhat general; it gives no explicit answers to your questions. Feel free to ask more questions, but remember that specific answers need specific questions. :)



#5248719 My scene management failed

Posted by haegarr on 25 August 2015 - 03:19 AM


#1. I do have a cache of Mesh objects in SceneManager (see "MeshCache _meshCache;"). The scene nodes don't store MeshData objects, but point to them.

Meshes (i.e. the shared part) are resources. Caching them is a task of resource management. Scene management, on the other hand, is responsible for all the entities that are currently in the scene. Those are 2 distinct things.

 


#2. I basically have a SceneGraph object stored in SceneManager so that the user is able to get the pointer to that SceneGraph objects via sceneManager->getSceneGraphPtr(). Is that still wrong?

If I remember L. Spiro's usage of terms correctly, then scene management deals with the existence of entities in the scene, while a scene graph propagates properties. Those again are distinct concerns, and in this sense having a scene manager handle a scene graph would be wrong.

 


How do you handle animated models in your engine? [...]

That's the way I'm handling this (L. Spiro probably does it in another way) ...

 

When a game object becomes part of the scene as the last step of instantiation, it is represented by a couple of objects. The objects store their own necessary parameters (i.e. those that are unique to the instance) and usually also refer to commonly used resources. It is allowed for clients to overwrite references. Other clients are not interested in how the object is built as long as it provides the parameters the client is interested in.

 

An animation clip is a resource; it can be used by more than a single game object. To be actually used, a game object needs an animation runtime object (similar to the MeshInstance mentioned above). The runtime object stores the current state of animation of that particular game object, while it refers to one or more animation clips to have access to the common animation definition data. The key point now is that when the animation sub-system is running during the progress of the game loop, it will alter parameters of some runtime objects (besides animation runtime objects). This may be a 3D skeleton pose, a sprite attachment, or whatever. Notice that a skeleton also has a runtime object besides a defining resource.

 

After running the animation sub-system, all animated game objects are again static for the remaining time until the game loop wraps around. A subsequent (CPU based) skinning process computes a new mesh. For the rendering sub-system there is no difference between animated and non-animated game objects, because the rendering just looks at the relevant parameters and finds a mesh, a sprite, or whatever.

 


[…] What kind of files do you have? I guess I can create my own file formats for animated meshes (barbarian.mesh, barbarian.skel, barbarian.animdata) but is it really needed?

The file representation is detached from the in-memory representation, because the requirements are different. Okay, the file has to store data which later occurs as resources. But whether they are stored in individual files or archive files, whether they are compressed or not, whether they are grouped into load bundles, … is a question of the resource loading sub-system, which itself is a lower level part of the resource management system.

 

It is not necessary to create your own file format as long as you are well-pleased with an existing one. As soon as you want to gain some loading performance by using in-place loading, support load bundles, support streaming, use a unified resource loader, obfuscate your resources, … you probably need to define your own format, or look out for usable file formats specifically made for game content.




#5248531 Programming scientific GUI's, data and gui layout?

Posted by haegarr on 24 August 2015 - 08:17 AM

Well, software patterns are somewhat generic by definition; otherwise they would be available as a library. Besides that, architectural patterns like MVC, MVP, MVVM, and the more advanced ones are actually what to look at for desktop applications, including scientific ones. Those patterns are about the separation of business data, their representation, and their manipulation. I suggest you look for comparisons, because such comparisons should hint especially at typical use cases. Nevertheless, don't forget that patterns are just guidelines; don't hesitate to diverge when appropriate.

 

Totally unrelated to the GUI architecture is the question of business data management. You should avoid storing original and derived data in the same object. Treat it like variables in a programming language: you have a variable with the original data, you apply an operator, and you obtain a result that is stored in another variable. This is fine because you don't know which operator will be applied, how often an operation will be applied, or to which data it will be applied. So you need to provide the most flexibility to that storage system. Maybe an operator is allowed to overwrite its source (see below), but the general case of writing to a new variable should always be available, and it must be available if the format of the output is different anyway.

 

Regarding the operators themselves … it depends. Do you need a history of applied operations? Does undo need to be supported? Do you need macros / operation recording? Should the operations be re-applied if input data changes? Do you need a type system to distinguish data types?




#5248503 2D OpenGL lighting using shaders

Posted by haegarr on 24 August 2015 - 06:22 AM


Should it be like that? I've tried so many modifications to my shaders and played with them but could not get what I want...

The texture and alpha should be as when the object is fully lit. It is the light that makes a scene bright or dark, not the scenery.

 

Then in the shader compute / sample a light value (some gray, usually, where black means unlit and white means fully lit) and multiply (component by component) that light value with the texture color value. Regardless of the texture color value, multiplying it with the extreme (0,0,0,1) (no light) will yield black, and multiplying it with the extreme (1,1,1,1) (full light) will yield the texture's color; anything in-between will yield shades of the texture color.




#5248034 RPG, Engines and Frustration

Posted by haegarr on 21 August 2015 - 06:57 AM


So, where should I start? Can anybody give me a path to follow whose final destination is an RPG like The Legend of Zelda: A Link to the Past?
You said something about Tetris. How can I start with Tetris? Should I use Unity2D for that? I have no idea.

As L. Spiro stated, don't start with the desired project! Perhaps don't even start with a real project at all. If you start unprepared right into the desired project you'll get frustrated very quickly due to all the unknown nitty-gritty that needs to be handled, and that will definitely jeopardize the project.

 

Since your goal is to finish a game, using not just an existing engine but a tool like Unity (or Unreal, …) is IMHO the way to go. There are plenty of (video) tutorials for Unity and Unreal. Do not just look at them but get your own experience by re-enacting them (do not restrict yourself to tutorials related to RPG stuff here). That way you get a feeling for the tool and how things are expected to work within it. After doing so for some time, start to bring in your own ideas / variations. Then start your own small game project. And only after that has been finished (it need not be polished, but, well, playable), plan out your desired project with your then-existing experience and finally go for it.

 

Just my 2 cents, you know :)




#5247392 Quaternions for FPS Camera?

Posted by haegarr on 18 August 2015 - 08:37 AM


Right now I'm just playing with the camera. I've added it to the scene at 0, 0, 0. I've gotten it to rotate using camera.rotation.x/y. From what I've seen, other people seem to implement a vector that the camera "looks at" (using the .lookAt function). [...]

The look-at function is useful to align the camera once or to track an object. It is just one possibility to control the camera.

 

IMO you should understand it like this: the camera is an object in the world similar to a game object. It has a placement (position and orientation) and additionally a field of view and other view-related stuff. The camera by itself does not change its placement. Then you can apply the functionality of objects like LookingAt or Tracking or Parenting or … to control the placement in part or in total. That way gives you maximum flexibility.

 

 


[…] Really I just want to create a camera that allows me to look around in the scene. I put a cube at 0, 0, 5 and just want to be able to move the camera around and look at the cube from different angles. For the development I'm using Threejs (threejs.org).

Well, that does not sound like an FPS camera but like a free camera, perhaps with a tracking constraint control.
 
As said, I'd implement this as a camera object with a placement. The placement should be able to provide a matrix that stores the "local to global" spatial transform. The placement should provide an API for setting and altering position and orientation separately. Then I'd implement a camera control that processes input, generates movement from it, and applies it to the attached Placement (which, of course, belongs to the camera in this case).
 
I'd further implement a control Tracking that is parametrized with (a) a Placement that is to be tracked (its position, to be precise) and (b) a Placement that is the target to be altered (its orientation, to be precise). The math is such that the difference vector from the Placement.position of the target to the Placement.position of the tracked placement is used (after normalization) as the forward vector of the typical look-at functionality. What needs to be done then is that the control is invoked every time the Placement.position of the target object has been settled after being altered.
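
A sketch of such a Tracking control on top of a minimal Placement, using GLM rather than Three.js; all names are mine and not any engine's API:

    #include <glm/glm.hpp>

    struct Placement {
        glm::vec3 position{0.0f};
        glm::mat3 orientation{1.0f};   // rotation part of the "local to global" transform
    };

    // Orients 'target' so that it looks at 'tracked'. Invoke it every time the
    // Placement.position of the target (or the tracked object) has been settled.
    struct Tracking {
        Placement* tracked = nullptr;  // (a) placement whose position is tracked
        Placement* target  = nullptr;  // (b) placement whose orientation is altered

        void apply(const glm::vec3& worldUp = glm::vec3(0.0f, 1.0f, 0.0f)) const {
            // difference vector from the target to the tracked placement, normalized
            glm::vec3 forward = glm::normalize(tracked->position - target->position);
            // degenerate when forward is (nearly) parallel to worldUp, see the up-vector post
            glm::vec3 right = glm::normalize(glm::cross(forward, worldUp));
            glm::vec3 up    = glm::cross(right, forward);
            // columns: right, up, -forward (the usual OpenGL camera convention)
            target->orientation = glm::mat3(right, up, -forward);
        }
    };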



#5247371 How to revert the scale of a matrix?

Posted by haegarr on 18 August 2015 - 06:28 AM

It depends. In general you cannot reconstruct, from the matrix alone, the history of how the one available matrix was generated. You can just decompose the matrix into an equivalent translational and scaling transform (leaving rotation aside as mentioned in the OP), replace the transform of interest with its desired substitute, and re-compose, so that the translational part is not affected. But if the composition was done so that the position was affected by a scaling (as e.g. in S1 * T * S2), then you cannot eliminate the scaling totally (AFAIK).

 

So in your case decomposition is relatively easy, because in a homogeneous 3x3 matrix without rotation there is an embedded 2x2 matrix that is affected by scaling only but not by translation. You get this sub-matrix if you strip the row and the column where in the 3x3 matrix the homogeneous "1" is located. The resulting sub-matrix must be a diagonal matrix, i.e. only the values at [0][0] and [1][1] differ from zero. Both of those values are in fact the scaling factors along the x and y axis directions, resp. Hence setting both these values to 1 will do the trick.
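
As a sketch with GLM, assuming a column-major glm::mat3 whose last column holds the translation, i.e. the homogeneous "1" sits at [2][2]:

    #include <glm/glm.hpp>

    // Removes the scaling from a homogeneous 3x3 2D transform without rotation,
    // leaving the translation untouched.
    glm::mat3 removeScale(glm::mat3 m)
    {
        // the embedded 2x2 sub-matrix is diagonal here; its diagonal holds the scale factors
        m[0][0] = 1.0f;   // scale along the x axis
        m[1][1] = 1.0f;   // scale along the y axis
        return m;
    }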





