
haegarr

Member Since 10 Oct 2005

#5244600 Curved and sloped 3D tiles

Posted by haegarr on 05 August 2015 - 01:11 AM

Tiles need not be flat, not even at their perimeter. Make sure that neighboring tiles have the same kind of edge at the common seam, i.e. they use the same sequence of intermediate vertex positions. If you do this in the manner of modular assembly, you still have a limited set of shapes, which probably makes the level design easier.

 

Some details to be considered are:

 

* There needs to be height information for every point where a dynamic game entity can be placed (see the sketch after this list).

* Depending on the camera height and pitch as well as the modeled slope, depth testing and/or backface culling may be needed.

* The camera height needs to be adjusted if the player character is about to leave the upper display edge.

* Simple top-down texture mapping looks ugly as the slope gets steeper, due to texel stretching. You may want to support extra texture tile sizes.
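A minimal sketch of the first point, assuming each tile stores a small (res+1) x (res+1) grid of corner heights (names and layout are illustrative): bilinear interpolation then yields the elevation at any local position on the tile.

#include <algorithm>
#include <vector>

// Bilinear height lookup on a tile's height grid, so dynamic entities can be
// placed at the correct elevation; u and v are local tile coordinates in [0,1].
float sampleTileHeight(const std::vector<float>& heights, // (res+1)*(res+1) values
                       int res,                           // cells per tile edge
                       float u, float v)
{
    const float x = u * res;
    const float y = v * res;
    const int x0 = std::min(static_cast<int>(x), res - 1);
    const int y0 = std::min(static_cast<int>(y), res - 1);
    const float fx = x - x0;
    const float fy = y - y0;
    auto h = [&](int ix, int iy) { return heights[iy * (res + 1) + ix]; };
    const float h0 = h(x0, y0)     * (1.0f - fx) + h(x0 + 1, y0)     * fx;
    const float h1 = h(x0, y0 + 1) * (1.0f - fx) + h(x0 + 1, y0 + 1) * fx;
    return h0 * (1.0f - fy) + h1 * fy;
}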




#5244504 Collectable Items, Persistence and ECS Architectures

Posted by haegarr on 04 August 2015 - 09:17 AM


Is this a sensible approach? Is this something that should sit outside of the ECS? [...]

IMHO your approach is fine because it allows the level designer to deal with the problem in a consistent way (assuming that s/he is familiar with ECS, of course ;) ) and it seems to fit smoothly into the technical point of view, too.

 

The only issue I personally would have with that particular solution is that the term "persistence" is broader. I.e. if I came across a Persistence component, I would understand it as a (perhaps even larger) collection of attributes that persist across a room switch. It is not clear why "persistence" should refer to just this one piece of state.

 


[...]How have other people tackled this problem?

Well, I do not have this exact problem because in my engine the overall state is persistent anyway. So persistence is included without the need for an explicit Persistence component. A spawn point that is never disabled and is triggered on an "entering room" signal will then spawn each time, and one that has been disabled will not.
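A minimal sketch of that idea with hypothetical types and names: the spawn state lives in the ordinary, persistent world state, so no separate Persistence component is required; disabling the spawner is what makes an item stay collected.

#include <vector>

struct SpawnPoint {
    bool enabled  = true;  // set to false once the item has been collected
    int  prefabId = 0;     // which entity/item to spawn
};

struct Room {
    std::vector<SpawnPoint> spawnPoints;
};

// Invoked on the "entering room" signal: only still-enabled spawn points fire.
void onEnterRoom(Room& room /*, EntityFactory& factory */)
{
    for (SpawnPoint& sp : room.spawnPoints) {
        if (sp.enabled) {
            // factory.spawn(sp.prefabId);   // hypothetical factory call
        }
    }
}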




#5244457 Normal map, height map, bump map... are all the same thing?

Posted by haegarr on 04 August 2015 - 03:04 AM

Most things have already been mentioned in the posts above. Still, I think the difference between a map as a parametrization and the mapping as the applied technique deserves a little more emphasis.

 

A height map is an array of pixels with a single channel (hence it appears as a grayscale image), where each pixel denotes an elevation height w.r.t. a reference surface. It can be used with the bump mapping technique (and hence the map itself is also called a bump map) to simulate bumps and wrinkles on a surface without changing the geometry of the surface, not even temporarily. It can also be used for the displacement mapping technique, where the geometry is actually changed. Because a height map has only one channel, the displacement is restricted (usually along the normal of the surface onto which the mapping is applied). When applied to terrain (originally a flat horizontal surface), the map is sometimes also called an elevation map.

 

A full displacement map can be used, too, so that 3 channels are available, e.g. one for each of the directions normal, tangent, and bi-tangent to the surface.

 

A normal map is similar to a full displacement map, but instead of a displacement offset there is just a direction stored in the 3 channels. It cannot be used for displacement, because it lacks a distance. It can, however, be used to simulate surface bumps and wrinkles with the normal mapping technique. In fact, when doing bump mapping you need to compute the normal distortion from the gradient of the bump map pixels, and hence more or less convert the bump map into a normal map on the fly.
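A minimal sketch of that last point (assumed single-channel height grid, names illustrative): deriving a tangent-space normal from the height-map gradient via central differences.

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 normalFromHeightMap(const std::vector<float>& height,
                         int width, int rows,
                         int x, int y, float strength /* bump scale */)
{
    auto h = [&](int ix, int iy) {
        ix = std::max(0, std::min(ix, width - 1));   // clamp at the borders
        iy = std::max(0, std::min(iy, rows - 1));
        return height[iy * width + ix];
    };
    // Gradient of the height field via central differences.
    const float dx = (h(x + 1, y) - h(x - 1, y)) * 0.5f * strength;
    const float dy = (h(x, y + 1) - h(x, y - 1)) * 0.5f * strength;
    // Tangent-space normal: points straight "up" where the surface is flat.
    Vec3 n{ -dx, -dy, 1.0f };
    const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return Vec3{ n.x / len, n.y / len, n.z / len };
}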




#5244247 OpenGL- Render to texture- a specific area of screen

Posted by haegarr on 03 August 2015 - 01:24 AM


Do I still have to work with the projection matrix?

Yes, you still have to do a projection. In this respect an FBO is no different from the default framebuffer. Moreover, you also need to find a suitable view matrix.

 


Can you give me some tips on how to do so, and which way should I follow exactly?

As already mentioned, you need the full MVP matrix. You can treat the situation as a scene of its own, i.e. the sheet of paper is the only object in the world. That allows you to set the camera to a standard position and orientation and to place the object somewhere in front of it on the z axis. As a result both M and V are easy to build, where V is just the identity matrix or perhaps just some scaling.

 

Regarding P, you need to know the left, right, top, bottom, near, and far planes of the view volume, all of this in view space (which, if you follow the above, is the same as world space, perhaps with the exception of scaling). Now you can build P either by setting it element by element (see e.g. here for details), or by using one of the usual GLM or OpenGL / GLU routines.
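A minimal sketch with GLM, assuming an orthographic projection and a sheet of size paperW x paperH placed at z = -1 in front of an identity camera (all values illustrative):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 buildMVP(float paperW, float paperH)
{
    // M: place the sheet in front of the camera on the negative z axis.
    glm::mat4 M = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -1.0f));
    // V: the camera sits at the origin looking down -z, so identity is enough.
    glm::mat4 V = glm::mat4(1.0f);
    // P: view volume tightly enclosing the sheet (left, right, bottom, top, near, far).
    glm::mat4 P = glm::ortho(-0.5f * paperW, 0.5f * paperW,
                             -0.5f * paperH, 0.5f * paperH,
                              0.1f, 10.0f);
    return P * V * M;
}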

 

It's not clear to me how far you have progressed, and which kind of projection you use. In which co-ordinate system have you calculated the red borders?

 


I'm sorry for my novice questions.

There is nothing that needs to be excused here :)




#5244157 Simple inheritance

Posted by haegarr on 02 August 2015 - 09:34 AM

What shouldn't be happening?
He said, "It only sees the implementation if I include the .cpp file of the class A" that had the definition. No surprise there.

The reported linker errors shouldn't happen, and including an implementation file from another implementation file should not be done in general. It hints at a wrong project set-up. Either the point of separate compilation is lost, or else (legitimate) linker errors occur as soon as the base class is inherited by more than one derived class.

 

probably looking for something like this.
B myB;
myB.A::foo();

Although this is a valid invocation, doing so is (a) not necessary for the given problem situation, and (b) hides a potential problem. Such an invocation states explicitly that A::foo should be invoked even if B::foo exists! That implies that the client has a deep understanding of the inner workings of the class hierarchy; woe betide anyone who uses this invocation without good reason. The fact that B does not override A::foo is definitely not a good reason on its own. So this kind of "solution" should only be mentioned together with a warning about what actually happens.
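A short illustration of that pitfall (hypothetical classes): the explicit A::foo() call keeps hitting the base implementation even after B starts overriding foo().

#include <iostream>

struct A {
    virtual ~A() = default;
    virtual void foo() { std::cout << "A::foo\n"; }
};

struct B : A {
    void foo() override { std::cout << "B::foo\n"; }  // added later, perhaps
};

int main() {
    B myB;
    myB.foo();      // prints "B::foo" - normal virtual dispatch
    myB.A::foo();   // still prints "A::foo" - silently bypasses the override
}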

 


and btw, when you implement a derived virtual, it doesn't 'overwrite' the base implementation. It will still be there.

With respect to the C++11 keyword, the proper term would be "overriding".




#5243895 Hierarchy of meshes and entities

Posted by haegarr on 31 July 2015 - 03:02 PM

I'm not particularly familiar with Ogre, so I'm not necessarily a good reference here. To be honest, looking at the interfaces of Ogre::Entity and Ogre::Mesh, for example, does not make me happy from an architectural point of view.

 


I guess that Entity holds its own hierarchy of nodes and sub-meshes attached to them [...]

Perhaps, but it may be different. Normally a sub-mesh is defined as a separate set of vertices for which another material is used, and so leads to another draw call. A weapon, on the other hand, is an entity of its own: it may be held in a hand but may also be dropped onto the floor. That is different from a sub-mesh.




#5243793 OpenGL- Render to texture- a specific area of screen

Posted by haegarr on 31 July 2015 - 08:03 AM

You have to set the projection and view matrices so that "your camera sees what you want to render" into the texture. You have to use glViewport and perhaps glScissor to control to which area of the texture the rendering goes.
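A minimal sketch of the second part, assuming the texture is already attached to the FBO and some GL loader is in use: glViewport and glScissor restrict rendering to a sub-area of the texture.

#include <GL/glew.h>   // or whichever GL loader you use

void renderToTextureArea(GLuint fbo, int x, int y, int w, int h)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(x, y, w, h);            // map NDC onto the chosen texture area
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, w, h);             // clip clears and draws to that area
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... issue draw calls with suitable view / projection matrices ...
    glDisable(GL_SCISSOR_TEST);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}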




#5243792 Hierarchy of meshes and entities

Posted by haegarr on 31 July 2015 - 07:57 AM


Why does it matter? I thought it would be easier for the "engine user" to refer to and work with just names and not indices.

It matters because string comparison costs roughly an order of magnitude more than comparing integers. If you want strings at runtime, then you can compute the hash of the string once and hand over both the string and the hash value, for example in a "Name" structure.

 

BTW: Hashes are not indices.
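A minimal sketch of such a "Name" structure (illustrative, using the standard library hash): the hash is computed once, so lookups can compare cheap integers first and fall back to the string only on a hash match.

#include <cstddef>
#include <functional>
#include <string>

struct Name {
    std::string text;
    std::size_t hash;

    explicit Name(std::string s)
        : text(std::move(s)), hash(std::hash<std::string>{}(text)) {}

    bool operator==(const Name& other) const {
        return hash == other.hash && text == other.text;
    }
};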

 


It is recursive. Here are the implementations:

It is not a mistake as such, but it strikes me as curious that "findSceneNode" checks the node's own name. Personally I'd let the routine test only the child nodes, and do so directly.

 


Also what are node paths and why would I need them?

Each node needs to be uniquely addressable. Because your search is recursive, you need names that are unique not only within the set of direct child nodes but across the entire scene graph. One possibility is to mimic what hierarchical file systems do: use a path like "/root/child_at_1st_level/child_at_2nd_level/my_node_of_interest". That would be a node path, because it addresses the node of interest via the names of its parents instead of constructing a name like "my_node_of_interest_with_a_super_long_and_hence_hopefully_unique_name". Notice that naming sub-nodes in particular would be close to impossible as soon as several instances of a model are placed into the scene, unless a path-like scheme is used.
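A minimal sketch of resolving such a path (hypothetical Node type; segments are taken relative to the given root node):

#include <sstream>
#include <string>
#include <vector>

struct Node {
    std::string name;
    std::vector<Node*> children;
};

// Resolves e.g. "child_at_1st_level/child_at_2nd_level/my_node_of_interest"
// by descending one segment at a time; returns nullptr if any segment is missing.
Node* findByPath(Node* root, const std::string& path)
{
    std::istringstream segments(path);
    std::string segment;
    Node* current = root;
    while (current && std::getline(segments, segment, '/')) {
        if (segment.empty()) continue;           // tolerate a leading '/'
        Node* next = nullptr;
        for (Node* child : current->children) {  // only direct children
            if (child->name == segment) { next = child; break; }
        }
        current = next;
    }
    return current;
}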

 


[...]The logic behind having more than one mesh attached to a SceneNode is for attaching swords to hands, MAGs to tanks etc., so when the hand rotates so will the sword.[...]

That would not be possible that way. Such things need to be child nodes because they need their own transform (a local-to-parent transform), which is definitely part of the node and not of the mesh.

 


Why would I care about adding order of child nodes?

As said: it's okay the way you do it as long as you do not need an order. I just wanted to point that out.

 


Where else can I load the data from storage?

The resource management should have a loader object for this kind of thing.

 


I actually have Caches of materials inside Mesh. Like Cache etc.

Well, if your mesh class is actually a model then it is probably okay, apart from the fact that another name should be chosen. (An actual mesh should not need to know about materials.)




#5243342 From openGL to various 3D format

Posted by haegarr on 29 July 2015 - 02:53 AM


1) In my code I'm using some GLUT built-in methods and functions to create objects such as glutSolidTeapot(), glutSolidCube(), etc. I would like to know if I can "export" them to one of the formats named up above. If not, should I build my own functions to display them, so I can have the vertex positions and save them in the files?

I do not know a way to get the models back from OpenGL other than intercepting the calls. You can download the GLUT source code from the internet, e.g. here, and extract the mesh data (considering any copyright, of course).

 

But I also do not see what this would be good for. Generating a cube is easy, and the teapot (like hundreds of other meshes) can be downloaded from the internet (e.g. here as OBJ).

 


2) I'm already done with the OBJ parsing, but I have no clue how the PLY and VRML files behave. I could use a little help here please, thanks!

The specifications can be found on the internet, e.g. here for PLY. What exactly do you want to know?

 

However, I'm surprised by your intent.

a) Does PLY really play a role nowadays?

b) VRML is superseded by X3D. Besides that … VRML / X3D is a beast! 

c) OBJ and PLY are both supported by AssImp for both import and export.
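Regarding (c), a minimal sketch of such a conversion with AssImp; the format IDs and flags shown here are what I would expect, so check the exporter's format descriptions for the exact IDs.

#include <assimp/Exporter.hpp>
#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>

// Reads an OBJ file and writes it back out as PLY.
bool convertObjToPly(const char* inPath, const char* outPath)
{
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile(inPath, aiProcess_Triangulate);
    if (!scene) return false;

    Assimp::Exporter exporter;
    return exporter.Export(scene, "ply", outPath) == AI_SUCCESS;
}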




#5243174 Hierarchy of meshes and entities

Posted by haegarr on 28 July 2015 - 08:44 AM

So I could have a SceneManager class, with all the objects in a single array, and then the separate scene graph hierarchy would have nodes that only point to the objects in the SceneManager class?

Something along those lines, yes.

 

See, there are many possible ways to structure the world: spatial vicinity, several parent-child relations, static vs. dynamic objects, groups of related colliders, … whatever. The solution is not to try to create the one omnipotent structure that attempts to fulfill all needs but ends up fulfilling each need to 80% at best. Instead, each sub-system should drive its own structure, one that is best suited to its job. That means having, say, an octree to solve spatial vicinity problems, several dependency trees, perhaps several groups of related colliders, and so on. That is what L. Spiro means, e.g. with "a scene graph is not for rendering".
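A minimal sketch of that separation (all names illustrative): the objects live in one flat store, and each sub-system keeps its own structure that references them by id.

#include <cstdint>
#include <vector>

using ObjectId = std::uint32_t;

struct GameObject { /* transform, components, ... */ };

struct SceneManager {
    std::vector<GameObject> objects;            // the single array of objects
};

struct SceneGraphNode {                         // parent-child transforms only
    ObjectId object = 0;
    std::vector<SceneGraphNode> children;
};

struct OctreeCell {                             // spatial vicinity queries only
    std::vector<ObjectId> contained;
    OctreeCell* children[8] = {};
};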




#5243122 Texture Issues With Unusual Internal Formats

Posted by haegarr on 28 July 2015 - 01:28 AM

There are many things that may go wrong. The first candidate that comes to my mind is the pixel unpack alignment. With each texel being 2 bytes in size, and the chance that you transfer an odd number of texels per row, the default alignment of 4 will not match. Have you considered glPixelStorei(GL_UNPACK_ALIGNMENT, 2) yet? Alternatively, do you pad the rows yourself?
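A minimal sketch of that suggestion, assuming a 2-byte-per-texel format such as GL_RG8 and some GL loader: set the unpack alignment before the upload so odd row widths are not misread.

#include <GL/glew.h>   // or whichever GL loader you use

void uploadRG8(GLsizei width, GLsizei height, const unsigned char* pixels)
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);  // rows start on 2-byte boundaries
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RG8, width, height, 0,
                 GL_RG, GL_UNSIGNED_BYTE, pixels);
}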




#5243011 How do you manage assets?

Posted by haegarr on 27 July 2015 - 12:00 PM

This question has been discussed many times already. Searching for "asset manage" or "resource manage" will give you plenty of hits. There you can find much about caches and loaders. However, many if not most of them suggest managing assets separated by type. Why exactly do you feel that is wrong?




#5243000 compositing images.

Posted by haegarr on 27 July 2015 - 10:47 AM


Hmm, a 2D skeletal system? I didn't consider this, [...]

It's not exactly a skeleton, because a skeleton animates bone by bone, while the suggested system animates the armature as a whole. It is closer to sprite animation but supports exchangeable parts.

 


[...] iirc spline is a tool that can do this. Might give this a shot, thanks for the suggestion.

It's named Spine.



#5242982 compositing images.

Posted by haegarr on 27 July 2015 - 09:37 AM

Rendering such a character and crafting it (i.e. a tool and/or workflow to do so) are two different things.

 

You can animate / render such a character as a cut-out figure with an armature. Let's say there is an object of the Figure class. The object provides a pointer to an instance of the Armature class as well as a couple of slots, one for each cut-out graphic. An armature is just a spatially arranged set of 2D anchors. For each (possible) anchor there is a cut-out graphic slot available within the Figure object. Each cut-out graphic has a 2D pivot which is meant to spatially match the corresponding anchor in the armature. At every turn, the animation system picks the current Armature instance and links it into the Figure object. Rendering is done by iterating the filled slots in order and drawing the associated graphic at the placement described by the corresponding armature anchor.

 

In fact the above is similar to 2D skeletal animation, except that the animation track writes the entire pose rather than individual bones. However, it allows the graphics to be combined independently of the animation. It further allows the graphics to be animated in flip-book manner, too; just let another (selected) animation track set the cut-out graphic slot in the Figure object.
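A minimal sketch of the Figure / Armature idea described above (all names illustrative): the armature is a fixed-size set of 2D anchors, the figure holds one cut-out graphic slot per anchor, and rendering walks the slots in order.

#include <array>
#include <cstddef>

constexpr std::size_t kAnchorCount = 8;

struct Anchor        { float x = 0, y = 0, rotation = 0; };
struct Armature      { std::array<Anchor, kAnchorCount> anchors; };
struct CutOutGraphic { /* texture region, 2D pivot, ... */ };

struct Figure {
    const Armature* pose = nullptr;                         // set by the animation system each turn
    std::array<const CutOutGraphic*, kAnchorCount> slots{}; // one slot per anchor
};

void renderFigure(const Figure& figure /*, Renderer& renderer */)
{
    if (!figure.pose) return;
    for (std::size_t i = 0; i < kAnchorCount; ++i) {
        if (const CutOutGraphic* graphic = figure.slots[i]) {
            const Anchor& at = figure.pose->anchors[i];
            // renderer.draw(*graphic, at.x, at.y, at.rotation);  // hypothetical draw call
            (void)graphic; (void)at;
        }
    }
}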




#5242909 Rotating a sprite toward mouse [SFML]

Posted by haegarr on 27 July 2015 - 01:29 AM

Are mousePos and shipSprite.getPosition() given in the same space, i.e. are both in screen space or both in world space? The mouse position comes from sf::Mouse::getPosition, which yields window co-ordinates. On the other hand, a sprite position is usually given in world space, which may differ dramatically from screen space. Because you compute the difference of the two positions, you need to make sure that both are in the same space.
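A minimal sketch of one way to do that (SFML 2.x assumed): map the pixel position into the view's world co-ordinates before computing the angle.

#include <SFML/Graphics.hpp>
#include <cmath>

void rotateTowardMouse(sf::Sprite& shipSprite, const sf::RenderWindow& window)
{
    // Pixel (window) co-ordinates -> world co-ordinates of the current view.
    const sf::Vector2i pixel = sf::Mouse::getPosition(window);
    const sf::Vector2f mouseWorld = window.mapPixelToCoords(pixel);

    const sf::Vector2f delta = mouseWorld - shipSprite.getPosition();
    const float angleDeg = std::atan2(delta.y, delta.x) * 180.0f / 3.14159265f;
    shipSprite.setRotation(angleDeg);
}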





