Seconding everything Hodgman listed, except that I would rephrase his "perhaps you need to iterate through your animation data as the outer loop first ..." as "you should iterate through your animation data as the outer loop first ...". Any sub-system should manage its data itself. A skeleton animation sub-system, for example, should hold a list of all skeletons that need to be updated. A skeleton gets registered / unregistered with the sub-system when it is instantiated into / removed from the scene. This unburdens the sub-system from iterating the scene tree and touching all those unneeded nodes.
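The registration idea can be sketched like this (in Python, for illustration; all class and method names are made up, not from any particular engine):

```python
class SkeletonAnimationSystem:
    """The sub-system owns the list of skeletons that actually need
    updating, so its update loop never walks the scene tree."""

    def __init__(self):
        self._skeletons = []

    def register(self, skeleton):
        # called when the skeleton is instantiated into the scene
        self._skeletons.append(skeleton)

    def unregister(self, skeleton):
        # called when the skeleton is removed from the scene
        self._skeletons.remove(skeleton)

    def update(self, dt):
        # the outer loop iterates the animation data directly;
        # no unrelated scene nodes are ever touched
        for skeleton in self._skeletons:
            skeleton.advance(dt)

class Skeleton:
    def __init__(self):
        self.time = 0.0

    def advance(self, dt):
        self.time += dt
```

The scene management only calls register / unregister at instantiation and removal; everything else stays inside the sub-system.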
haegarr, Member Since 10 Oct 2005
Although I have been a professional programmer for years, game development is just one of my hobbies (for nearly as long as programming, and that is a loooong time).
Posted by haegarr on 17 September 2015 - 12:57 PM
There are (at least) 3 undo principles:
a) Inverting the effect of the last action;
b) restoring the memorized state that was valid before the last action (as you do ATM);
c) replaying the history of actions exclusive the last one.
None of them is per se suitable for pixel painting programs:
a) is not possible because information may be lost due to the former application of the action;
b) costs masses of memory (and bandwidth in your case);
c) costs much performance if the painting has progressed too far.
A way that is suggested now and then is to combine the above possibilities with the goal of lowering the average costs. For example, a memento is made only after every N actions, and a replay is then done for the at most N-1 remaining actions. The drawing area can be tiled for the purpose of storing a memento, so that just the tiles affected during the last N-1 actions need to be memorized. Older mementos can be externalized by a background job.
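A minimal sketch of the combined scheme (in Python, for illustration): a memento of the whole state is kept every N actions, and an undo restores the nearest older memento and replays the actions recorded after it. All names are made up, states are assumed immutable here, and a real painting program would snapshot tiles rather than the whole state.

```python
N = 4  # memento interval; illustrative value

class HybridUndo:
    def __init__(self, initial_state):
        self.history = []                     # every action ever applied
        self.mementos = [(0, initial_state)]  # (action count, state) pairs

    def apply(self, state, action):
        new_state = action(state)
        self.history.append(action)
        if len(self.history) % N == 0:        # memento every N actions
            self.mementos.append((len(self.history), new_state))
        return new_state

    def undo(self):
        # principle b) + c): restore the nearest memento, then replay
        # the at most N-1 actions recorded after it
        self.history.pop()                    # drop the undone action
        while self.mementos[-1][0] > len(self.history):
            self.mementos.pop()               # memento lies past the undo point
        count, state = self.mementos[-1]
        for action in self.history[count:]:
            state = action(state)
        return state
```

The replay cost is bounded by N, and the memento cost is amortized over N actions, which is exactly the trade-off described above.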
Posted by haegarr on 10 September 2015 - 02:33 AM
this works fine, but it has a max of 0.6 for x and 0.4 for y; I would like the max to be 2*pi for x and pi for y.
Please look into the other thread.
my question is generic, but how can I convert a value, for example from -10 .. 15, to 0 .. 360?
Assuming you want to map this linearly, you need to do
1. subtract the lower limit, here -10, so that the new lower limit is at 0
( -10 .. 15 ) - (-10) => 0 .. 25
2. normalize the range by dividing by the difference between the upper and lower limits, here 15-(-10)=25, so that
( 0 .. 25 ) / 25 => 0 .. 1
3. multiply by the desired range, here 360-0=360, so that
( 0 .. 1 ) * 360 => 0 .. 360
4. add the desired lower limit, here 0, so that
( 0 .. 360 ) + 0 => 0 .. 360
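The four steps above collapse into one small function (a Python sketch; the function name is made up):

```python
def remap(value, lo, hi, new_lo, new_hi):
    """Linearly map value from [lo, hi] to [new_lo, new_hi]:
    shift to 0, normalize, scale, shift to the new lower limit."""
    t = (value - lo) / (hi - lo)           # steps 1 and 2: yields 0 .. 1
    return new_lo + t * (new_hi - new_lo)  # steps 3 and 4
```

For the example in the question, `remap(-10, -10, 15, 0, 360)` gives 0 and `remap(15, -10, 15, 0, 360)` gives 360.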
Posted by haegarr on 10 September 2015 - 01:31 AM
1) the pitch has a limited range: when I move the mouse up and down, the mesh is rotated by 10/20 degrees around the x axis
Well, I made a mistake in post #10: the value s must be half of what I've written there.
Sorry for that.
2) moving the mouse from left to right I get a pitch variation and I don't understand why; it should change only from top to bottom or bottom to top. phi and theta are related to x and y of the mouse; I don't understand where I am wrong
This kind of solution does not work as you expect. Because it uses the atan2(y,x) function, phi is an angle measured around the screen center from the positive horizontal axis x in CCW direction (or CW, depending on your co-ordinate system). If you (were able to) move the mouse in a perfect circle around the center, you would get a smoothly varying phi and a constant theta. On the other hand, if you move the mouse in a straight line from the center to the outside, you get a constant phi and a smoothly varying theta. Well, at least you should get that due to the chosen model of camera rotation.
the phi at the corners is always 55, and the theta: [...]
I asked for (xp,yp) and not phi for a specific reason: If the co-ordinates are already wrong, then calculations based on those co-ordinates give nonsense in a probably not retraceable way.
On a 800 x 600 screen / window, and considering the correction I mentioned above, the variable s should be determined as min(800, 600) * 0.5 = 300. At the left edge mouse x would be 0, and hence
xp = (0 - 800 / 2) / 300 = -400 / 300 = -1.333
and at the right edge
xp = (799 - 800 / 2) / 300 = +1.33
Can you confirm this? Because here ...
float xp = ((m_deltax - m_width / 2) / s);
float yp = ((m_deltay - m_height / 2) / s);
you seem to deal with delta values of mouse motion. That would not be correct. You need to use absolute mouse position values for this kind of solution.
but what I do not understand is: is the hemisphere not a radius-1 hemisphere? and why do the corners go from 0.68 (xp) and 0.43 (yp) [...]
Yep, the normalization by s should make it a unit hemisphere. But because of the mistake, a hemisphere with radius 0.5 was computed so far.
BTW: A yp of 0.43 is wrong even when considering the wrong s. If you ran that stuff in a window with borders, you need to use the inner window size instead of the screen size. Do you do so?
[...] and not from 0 to 1?
Posted by haegarr on 05 September 2015 - 02:07 AM
As a rule of thumb: IMO a rendering sub-system should not (with one exception, see below) rely on state. Each rendering job should send a full set-up description, including any related parameters that it can change at all. (I.e. in the case of models: VB/IB set-up, material-related things, blending, primitive mode, shading, and so on.) Then the lowest layer just above OpenGL can be used to compare the requested set-up against an internal image of OpenGL's set-up; differences then result in OpenGL calls and, of course, changes to the internal image. This method is cheap enough, avoids confusion like that in the OP, and is useful for decoupling purposes.
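The comparison layer can be sketched as follows (Python for illustration; the `gl_calls` list stands in for real OpenGL invocations, and all names are made up):

```python
class StateCache:
    """Lowest layer above the graphics API: each job hands in a complete
    set-up, and only differences to the cached 'image' of the API state
    are turned into actual calls."""

    def __init__(self):
        self._image = {}    # internal image of the API's current state
        self.gl_calls = []  # stand-in for issued OpenGL calls

    def apply(self, setup):
        # setup is a full description, e.g. {'blend': True, 'vb': 3, ...}
        for key, value in setup.items():
            if self._image.get(key) != value:
                self.gl_calls.append((key, value))  # the actual state change
                self._image[key] = value
```

Submitting the same full set-up twice issues the calls only once, which is why sending a complete description per job is cheap in practice.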
Posted by haegarr on 02 September 2015 - 08:16 AM
I had some problems deciphering your post (no offence meant), so bear with me if I misunderstood what you meant ...
1) the quaternion needs only one angle to create an angular displacement, is that correct? why now two angles? for the two quaternions that must be interpolated?
A quaternion, in fact a unit quaternion, is a kind of representation for rotations. As such it encodes an axis of rotation and an angle of rotation (and it has the constraint that its 2-norm is 1; otherwise it would not be a unit quaternion and shearing would appear).
Interpolation means to calculate an in-between, having 2 supporting points (or key values) at the limits. Whether these 2 supporting points are spatially or temporally or otherwise related plays no role for the interpolation. What "2 quaternions" do you want to interpolate? The control schemes described above do not by themselves require quaternions. If you speak of a smooth transition from the current orientation to the next, then one support point is the most recently used quaternion and the other is the newly determined one (from mouse position / movement).
2) I see the squad and there are two quaternions and a variable t, time? then must I get the time for each step? and how can I convert t to [0,1]?
The 2 quaternions are the said support points, and the free variable (you used t, I will use k below) denotes where the in-between is located between the support points. You can compute an in-between only when you provide a value for k, yes. (But, as said, t need not be a time value.) How to determine a suitable k depends on what you want to achieve. For example, if you want N interpolation steps that are equally distributed within the allowed range [0,1], then you would use
kn := n / N with n = 0, 1, 2, …, N
where kn is the value for k at step n. Notice that n increments by 1 from 0 up to N, inclusively; this would be implemented as counting loop, of course. So you get
k0 = 0 / N = 0
kN = N / N = 1
as is required for the interpolation factor by definition.
If, on the other hand, you want the interpolation run over a duration T and started at moment in time t0 (measured by a continuously running clock), now at a measured moment t, then
k( t ) := ( t - t0 ) / T with t0 <= t <= t0+T
so that, as required by the interpolation factor definition,
k( t0 ) = ( t0 - t0 ) / T = 0
k( t0 + T ) = ( t0 + T - t0 ) / T = 1
As you can see in both examples above, the allowed range [0,1] is achieved by normalizing (division by N or T) and, in the case of the timed interpolation, by first shifting the real interval (subtraction of t0) so that it originates at 0; the latter part was not necessary in the first example because it already originates at 0.
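Both ways of determining the interpolation factor can be written down directly (a Python sketch; the `lerp` at the end stands in for the actual quaternion interpolation, e.g. slerp / squad, which takes k in exactly the same way):

```python
def k_equal_steps(n, N):
    """Interpolation factor for N equally distributed steps, n = 0 .. N."""
    return n / N

def k_timed(t, t0, T):
    """Interpolation factor for an interpolation of duration T started
    at moment t0, with t0 <= t <= t0 + T."""
    return (t - t0) / T

def lerp(a, b, k):
    """In-between of the two support points a and b at factor k in [0, 1];
    a quaternion slerp would take k the same way."""
    return a + k * (b - a)
```

Note that both helpers just shift the interval to originate at 0 (only needed in the timed case) and normalize it, exactly as described above.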
and how can I transform the position of the mouse to the hypersphere? must I project? how? [...]
Well, a hemisphere (half of a full sphere) is luckily not a hypersphere (a sphere in more than 3 dimensions)!
Let's say the mouse position is the tuple (mx,my) and the screen size is given by (w,h) in the same co-ordinate system as (mx,my). Then the relative mouse position is
s := min( w, h ) * 0.5     (EDIT: must be halved to yield a proper [-1,+1] normalization, hence the 0.5)
x' := ( mx - w / 2 ) / s
y' := ( my - h / 2 ) / s
x := r * sin( theta ) * cos( phi )
y := r * sin( theta ) * sin( phi )
z := r * cos( theta )
Due to normalization we can ignore the radius because it is 1.
If we divide y by x we achieve
y / x = sin( phi ) / cos( phi ) = tan( phi )
and hence we can compute phi' for our relative mouse position (x',y') using the famous atan2 function as
phi' = atan2( y', x' )
For theta or z, resp., we have 2 ways. One of them is derived from the fact that each point on the unit sphere is 1 length unit away from its center. That means for use
x'2 + y'2 + z'2 == 1
so that for our z', considering that we use the "upper" hemisphere, have
z' = +sqrt( 1 - x'2 - y'2 )
This is valid due to our above formulated condition that the mouse position is within the circle.
Hence we can calculate
theta' = acos( z' )
Now we have 2 angles, phi' and theta'. What is left over is how to map that onto yaw and pitch, a question you need to answer.
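Put together, the whole mapping reads like this (a Python sketch; assumes absolute, window-relative mouse co-ordinates, and the function name is made up):

```python
import math

def mouse_to_angles(mx, my, w, h):
    """Map an absolute mouse position (mx, my) on a w x h screen to the
    angles (phi, theta) on the unit hemisphere, following the derivation
    above."""
    s = min(w, h) * 0.5                 # note the halving
    xp = (mx - w / 2) / s
    yp = (my - h / 2) / s
    phi = math.atan2(yp, xp)
    # clamp guards against positions slightly outside the unit circle
    zp = math.sqrt(max(0.0, 1.0 - xp * xp - yp * yp))
    theta = math.acos(zp)
    return phi, theta
```

At the screen center this yields theta = 0, and on the rim of the inscribed circle theta = pi/2, as expected for the hemisphere.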
Posted by haegarr on 31 August 2015 - 06:42 AM
You have a factory method in your runtime that delivers a new instance of the requested kind. The factory method knows a recipe for every kind that can be requested. The recipe may be
a) a hardcoded routine; this has the drawback of not being as flexible as a data driven approach, and hence causes maintenance problems in the long run; it is, however, quickly implemented;
b) a prototype, i.e. a completely assembled instance, that is deeply copied and perhaps partly re-parametrized by the factory; this variant is what Juliean suggests if I understood it correctly;
c) a prescription of how to instantiate and assemble a new entity; the prescription is processed (e.g. interpreted) when needed;
You can use a combination of them. For example, a) or c) can be used to generate the prototype for b). Moreover, both the prototype as well as the prescription can be read from mass storage.
d) In the former case of the prototype we speak of de-serialization. It requires that the instance is built and serialized once, and can then be deserialized as often as needed (once per application start in our use case). As such, the representation on mass storage is close to the representation in memory, so that loading is relatively fast and re-interpretation of what is read is reduced to a minimum.
e) In the case of a prescription, loading is a breeze, because you load just data that is, however, later interpreted by the factory nonetheless. You can use a binary format or a text format for the file representation. A text format, together with a "human readable" format specification, may have the advantage that you can use any text editor to define the prescription at will. XML and JSON (and similar formats) are often used for this. However, XML is somewhat bloated, though it provides additional stuff like attributes.
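Variants b) and c) side by side in one factory can be sketched as follows (Python for illustration; all names are made up):

```python
import copy

class Factory:
    """Factory method with two kinds of recipes: a prototype that is
    deeply copied (variant b), and a prescription that is processed on
    demand (variant c)."""

    def __init__(self):
        self._prototypes = {}
        self._prescriptions = {}

    def register_prototype(self, kind, instance):
        # a completely assembled instance, used as a template
        self._prototypes[kind] = instance

    def register_prescription(self, kind, build):
        # e.g. a callable that interprets a parsed JSON recipe
        self._prescriptions[kind] = build

    def create(self, kind):
        if kind in self._prototypes:
            # deep copy so the new instance is independent of the template
            return copy.deepcopy(self._prototypes[kind])
        return self._prescriptions[kind]()
```

Callers never see which recipe kind was used; they just request `create(kind)`.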
Posted by haegarr on 29 August 2015 - 04:05 AM
It is not exactly possible (just as it is not by using a texture), but of course you can parametrize the surface of a sphere, compute the parameters of the point where the ray intersects, transform the parametrization into one suitable for coloring, and finally use the associated color.
1. Compute the intersection point in object local space using cartesian co-ordinates as usual.
2. Transform the cartesian co-ordinates into spherical co-ordinates.
3. Drop the radial co-ordinate and map the remaining two by modulo calculations.
4. Pick a color due to the 2 modulo values.
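The four steps can be sketched like this (Python for illustration; a checker pattern over the two angular co-ordinates is an assumption about what "pick a color due to the 2 modulo values" should produce, and the cell counts are made up):

```python
import math

def checker_color(x, y, z, color_a, color_b, n_phi=8, n_theta=8):
    """Color for an intersection point (x, y, z) in the sphere's local
    space: convert to spherical co-ordinates, drop the radius, and pick
    one of two colors by a modulo on the two angles."""
    r = math.sqrt(x * x + y * y + z * z)        # step 2: spherical co-ords
    theta = math.acos(z / r)                     # polar angle, 0 .. pi
    phi = math.atan2(y, x) + math.pi             # azimuth shifted to 0 .. 2*pi
    # step 3: drop r, map angles to integer cells
    u = int(phi / (2 * math.pi) * n_phi) % n_phi
    v = min(int(theta / math.pi * n_theta), n_theta - 1)
    # step 4: alternate the two colors
    return color_a if (u + v) % 2 == 0 else color_b
```

The same function works for any ray-sphere intersection point, since the radius is divided out before the angles are computed.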
Posted by haegarr on 29 August 2015 - 02:50 AM
The code snippets you provided so far are not sufficient for an analysis. So I'll describe what to do, now with more details (the following is the most basic way; it can be fleshed out, of course):
1. You have a mesh, obviously shaped as a quad. Each vertex has the mandatory position and a texture co-ordinate.
2. You have a texture that stores the color as it looks when fully lit. This is because you can darken a color easily, but brightening it would introduce inaccuracies and is not possible at all if it is black. Set this texture for sampling in the fragment shader.
3. You have a scene constant ambient light intensity given as RGB value. Set this value as uniform to the fragment shader.
4. You have a spot light with an intensity given as RGB, a position given in screen co-ordinates, and a radial extent given in screen co-ordinates. Set these values as uniforms to the fragment shader.
5. In the fragment shader, use the UV co-ordinates to sample the color texture.
6. Calculate the distance from the current fragment to the spot light position.
7. Attenuate the intensity triple of the spot light according to the distance calculated in 6.
8. Add the ambient intensity triple to the result of 7.
9. Clamp the result of 8. to (1,1,1).
10. Multiply the result of 9. by the sampled texture color.
11. Write the result of 10., extended by the homogeneous 1, as fragment color. Do not use the built-in blending engine here.
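Steps 6 to 10 are just per-fragment arithmetic; here is a CPU-side sketch of that math (in Python, for illustration; the linear falloff in step 7 is an assumption, as the post does not prescribe an attenuation curve):

```python
def spot_lit_color(tex_rgb, ambient_rgb, light_rgb, frag_xy, light_xy, radius):
    """Per-fragment math of steps 6-10: attenuate the spot light by
    distance, add ambient, clamp to (1,1,1), multiply with the sampled
    texture color."""
    dx = frag_xy[0] - light_xy[0]
    dy = frag_xy[1] - light_xy[1]
    dist = (dx * dx + dy * dy) ** 0.5            # step 6: distance to light
    atten = max(0.0, 1.0 - dist / radius)        # step 7: linear falloff
    light = [min(1.0, a + l * atten)             # steps 8 and 9: add, clamp
             for a, l in zip(ambient_rgb, light_rgb)]
    return [t * l for t, l in zip(tex_rgb, light)]  # step 10: modulate
```

In the real fragment shader the same expressions would operate on the sampled texture color and the uniforms from steps 3 and 4.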
Posted by haegarr on 26 August 2015 - 07:42 AM
What if I want to be able to set it upwards?
Then you have an infinite number of possibilities without knowing which one to use (from the targeted direction alone). So you either choose a heading heuristically or use historical information (about the previous heading) of how the upward direction was reached. By the way, the same is true for looking straight downwards.
Posted by haegarr on 25 August 2015 - 03:19 AM
#1. I do have a cache of Mesh objects in SceneManager (see "MeshCache _meshCache;"). The scene nodes don't store MeshData objects, but point to them.
Meshes (i.e. the shared part) are resources. Caching them is a task of resource management. Scene management, on the other hand, is responsible for all the entities that are currently in the scene. Those are 2 distinct things.
#2. I basically have a SceneGraph object stored in SceneManager so that the user is able to get the pointer to that SceneGraph object via sceneManager->getSceneGraphPtr(). Is that still wrong?
If I remember L. Spiro's usage of terms correctly, then the scene management deals with the existence of entities in the scene, while a scene graph propagates properties. Those again are distinct concerns, and in this sense having a scene manager handle a scene graph would be wrong.
How do you handle animated models in your engine? [...]
That's the way I'm handling this (L. Spiro does it probably in another way)...
When a game object becomes part of the scene as the last step during instantiation, it is represented by a couple of objects. The objects store their own necessary parameters (i.e. those that are unique to the instance) and usually also refer to commonly used resources. Clients are allowed to overwrite references. Other clients are not interested in how the object is built as long as it provides the parameters the client is interested in.
An animation clip is a resource; it can be used by more than a single game object. To be actually used, a game object needs an animation runtime object (similar to the MeshInstance mentioned above). The runtime object stores the current state of animation of that particular game object, while it refers to one or more animation clips to have access to the common animation definition data. The clue now is that when the animation sub-system is running during the progress of the game loop, it will alter parameters of some runtime objects (besides animation runtime objects). This may be a 3D skeleton pose, a sprite attachment, or whatever. Notice that a skeleton also has a runtime object besides a defining resource.
After the animation sub-system has run, all animated game objects are again still for the remaining time until the game loop wraps around. A subsequent (CPU based) skinning process computes a new mesh. For the rendering sub-system there is no difference between animated and non-animated game objects, because the rendering just looks at the belonging parameters and finds a mesh, a sprite, or whatever.
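The resource / runtime-object split described above can be sketched in a few lines (Python for illustration; the class names are made up):

```python
class AnimationClip:
    """Shared resource: the common, immutable animation definition data."""
    def __init__(self, duration):
        self.duration = duration

class AnimationInstance:
    """Per-game-object runtime object: stores the current state of
    animation and refers to the shared clip resource."""
    def __init__(self, clip):
        self.clip = clip          # reference to the shared resource
        self.local_time = 0.0     # instance-unique state

    def advance(self, dt):
        # the animation sub-system alters only runtime-object state;
        # the clip itself is never modified
        self.local_time = (self.local_time + dt) % self.clip.duration
```

Many game objects can hold an `AnimationInstance` that refers to the same `AnimationClip`, each with its own playback state.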
[…] What kind of files do you have? I guess I can create my own file formats for animated meshes (barbarian.mesh, barbarian.skel, barbarian.animdata) but is it really needed?
The file representation is detached from the in-memory representation, because the requirements are different. Okay, the file has to store data which later occurs as resources. But whether they are stored in individual files or archive files, whether they are compressed or not, whether they are grouped into load bundles, … is a question of the resource loading sub-system, which itself is a lower level part of the resource management system.
It is not necessary to create your own file format as long as you are well-pleased with an existing one. As soon as you want to gain some loading performance by using in-place loading, support load bundles, support streaming, are interested in a unified resource loader, want to obfuscate your resources, … you probably need to define your own format, or look out for usable file formats specifically made for game content.
Posted by haegarr on 24 August 2015 - 08:17 AM
Well, software patterns are somewhat generic by definition; otherwise they would be available as a library. Besides that, architectural patterns like MVC, MVP, MVVM, and the more advanced ones are actually what to look for in desktop applications, including scientific ones. Those patterns are about the separation of business data, their representation, and their manipulation. I suggest you look for comparisons, because such comparisons should hint especially at typical use cases. Nevertheless, don't forget that patterns are just guidelines; don't hesitate to diverge when appropriate.
Totally unrelated to the GUI architecture is the question of business data management. You should avoid storing original and derived data in the same object. Treat it like variables in a programming language: you have a variable with the original data, you apply an operator, and you get a result that is stored in another variable. This is fine because you don't know which operator will be applied, how often an operation will be applied, or to which data it will be applied. So you need to provide the most flexibility to that storage system. Maybe an operator is allowed to overwrite its source (see below); but the general case of writing to a new variable should always be available, and it must be available if the format of the output is different anyway.
Regarding the operators themselves … it depends. Do you need a history of applied operations? Does undo need to be supported? Do you need macros / operation recording? Should operations be re-applied if input data changes? Do you need a type system to distinguish data types?
Posted by haegarr on 24 August 2015 - 06:22 AM
Should it be like that? I've tried so many modifications to my shaders and played with them but could not get what I want...
The texture and alpha should be as when the object is fully lit. It is the light that makes a scene bright or dark, not the scenery.
Then, in the shader, compute / sample a light value (some gray, usually, where black means unlit and white means fully lit) and multiply (component by component) that light value with the texture color value. Regardless of the texture color value, multiplying with the extreme (0,0,0,1) (no light) will yield black, and multiplying with the extreme (1,1,1,1) (full light) will yield the texture's color; anything in-between will yield shades of the texture color.
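The component-wise multiplication is trivial to write down (a Python sketch of the shader math; the function name is made up):

```python
def lit_color(texture_rgba, light_rgb):
    """Multiply the texture color (stored as if fully lit) component by
    component with the light value; black light yields black, white
    light yields the texture color unchanged."""
    r, g, b, a = texture_rgba
    lr, lg, lb = light_rgb
    return (r * lr, g * lg, b * lb, a)  # alpha is left untouched
```

In GLSL this is just `texColor.rgb * light`, but the behavior at the two extremes is the same.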
Posted by haegarr on 21 August 2015 - 06:57 AM
So, where should I start? Can anybody give me a path to follow whose final destination is an RPG like The Legend of Zelda: A Link to the Past?
You said something about Tetris. How can I start with Tetris? Should I use Unity2D for that? I have no idea.
As L. Spiro stated, don't start with the desired project! Perhaps don't even start with a real project at all. If you start unprepared right into the desired project, you'll get frustrated very quickly due to all the unknown nitty-gritty that needs to be handled, and that will definitely jeopardize the project.
Since your goal is to finish a game, using not just an existing engine but a tool like Unity (or Unreal, …) is IMHO the way to go. There are plenty of (video) tutorials for Unity and Unreal. Do not just look at them but gain your own experience by reenacting them (do not restrict yourself to tutorials related to RPG stuff here). That way you get a feeling for the tool and how things are expected to work within it. After doing so for some time, start to bring in your own ideas / variations. Then start your own small game project. And only after that has been finished (it need not be polished, but it should be playable), plan out your desired project with your then-existing experience and finally go for it.
Just my 2 cents, you know