haegarr

Member Since 10 Oct 2005

#5156915 Resource managers: how much is overkill?

Posted by haegarr on 30 May 2014 - 02:31 AM

I'm not sure I understand why exactly the OP is speaking of overkill. It seems to me that using a package name within the resource name is what causes the problem. If so, the following description may help; if not … feel free to ignore my post ;)

 

When the system starts up, it scans a specific directory (the "overlay" directory) for files and assumes each file to contain a single resource. It stores the file names as resource names in an internal table of contents. I'm using hashes for resource names at the runtime level, but the file paths relative to the directory would work as well. The system then scans another directory (the "package" directory) for files, and each file determined to have the correct format is opened and its stored table of contents is merged with the one already in RAM. During this, if a resource name collides with one already in the internal TOC, it is ignored and the existing entry is left as is; additionally, if the existing entry itself refers to a package as its source, a conflict is logged.

 

The table of contents now has an entry for each single resource. The resource names managed therein are not tagged with a package name or with whether they originate from the overlay directory. This additional information is, of course, stored alongside the resource name within the entries.

 

A file offset and length are stored in the TOC for accessing a resource inside the package file. Now, that file offset and length do not necessarily address a single resource but may address a sequence of resources. This is called a load unit, because all addressed bytes are loaded (and later unloaded) at once. Nevertheless, the entries in the TOC still name individual resources, so requesting any resource of a load unit causes all resources in that load unit to be loaded. A "bill of materials", as may be stored along with a resource to declare dependencies, always lists each individual resource.
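To make the idea concrete, here is a minimal C++ sketch of such a table of contents (all type and member names are my own for illustration; the post does not prescribe any):

    #include <cstdint>
    #include <string>
    #include <unordered_map>

    // One entry per resource name; several entries may point to the same load unit.
    struct TOCEntry {
        std::string   sourceFile;  // package file or overlay file the bytes come from
        std::uint64_t offset;      // start of the load unit within that file
        std::uint64_t length;      // size of the load unit in bytes
        bool          fromPackage; // false for resources found in the overlay directory
    };

    std::unordered_map<std::uint64_t /*name hash*/, TOCEntry> toc;

    // Merging an entry read from a package TOC: entries already present win;
    // a collision between two package entries is logged as a conflict.
    void mergePackageEntry(std::uint64_t nameHash, const TOCEntry& entry) {
        auto it = toc.find(nameHash);
        if (it == toc.end()) { toc.emplace(nameHash, entry); return; }
        if (it->second.fromPackage) { /* log conflict */ }
        // otherwise: the overlay entry shadows the package entry, nothing to do
    }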

 

Of course, before the loader is requested to load a load unit, the cache is queried to see whether the resource is already loaded.

 

With a concept like the one above, resources are requested by name, regardless of their storage location and regardless of whether they are bundled with others or stored in their own load unit. It allows for hot-swapping during development and for installing updates without the need to ship an entire package file. All dependencies are still explicit on a resource-by-resource basis: if for some reason the toolchain decides to store a particular resource in another load unit, the bills of materials need no update.




#5156257 OpenGL Texture Coordinates

Posted by haegarr on 27 May 2014 - 05:03 AM


I'm using this tutorial pretty heavily, and its section on textures is confusing me to some degree.

I don't know what exactly is confusing you, so let me try to shed some light on the entire thing. Some of the following is probably already known to you, but I need to mention it for completeness.

 

1.) You need a vertex data stream with the vertices' positions. Engines often batch sprites to reduce draw calls. This requires all vertex positions, although coming from different sprites, to be specified w.r.t. the same space, e.g. world space or view space. Hence any motion applied to the sprites has to be applied on the CPU side already, before a VBO is filled.

 

Okay, you can use instancing and hence handle things another way, but that is an advanced topic.

 

2.) You need a vertex data stream with the vertices' uv co-ordinates. If you have only one texture to deal with, you need just one uv stream. If you have several textures, you may need more than a single uv stream. But it is also possible to use one and the same uv stream for several textures (e.g. when using a color map and a normal map with the same layout in texture space).

 

3.) For a sprite you usually don't use normals, because sprites are just flat (leaving some exotic variants aside). Otherwise, if normals are available, you need a vertex data stream for them, too.

 

4.) Whether you use one VBO per data stream, or put all of them into a single VBO, is usually a question of how dynamic the data in each stream is. For example, sprites are often computed frame by frame and transferred to the GPU in a batch. When the CPU computes both the vertex positions and uv co-ordinates on the fly, then both streams are dynamic and can easily be packed into a single VBO. On the other hand, if the CPU computes just the vertex positions but re-uses the uv co-ordinates as they are again and again, then the vertex position stream is dynamic but the uv co-ordinate stream is static; performance-wise this would suggest 2 different VBOs.
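As a small illustration of that split, here is a sketch with a dynamic position VBO and a static uv VBO (function and parameter names are placeholders of mine; an OpenGL 3.x context with a loader header is assumed):

    // Illustrative helper: posVBO/uvVBO were created with glGenBuffers beforehand.
    void uploadSpriteStreams(GLuint posVBO, GLuint uvVBO,
                             const void* posData, GLsizeiptr posBytes,
                             const void* uvData,  GLsizeiptr uvBytes)
    {
        // static uv stream: in practice uploaded only once
        glBindBuffer(GL_ARRAY_BUFFER, uvVBO);
        glBufferData(GL_ARRAY_BUFFER, uvBytes, uvData, GL_STATIC_DRAW);

        // dynamic position stream: orphaned and refilled every frame with the freshly batched sprites
        glBindBuffer(GL_ARRAY_BUFFER, posVBO);
        glBufferData(GL_ARRAY_BUFFER, posBytes, nullptr, GL_DYNAMIC_DRAW); // orphan old storage
        glBufferSubData(GL_ARRAY_BUFFER, 0, posBytes, posData);
    }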

 

5.) However, it is crucial that there is a one-to-one relation in the sequence of vertex positions and uv co-ordinates. On the GPU the position at index #n and the uv co-ordinate at index #n (as well as any other vertex data) together define vertex #n. That means if you load a model and drop its uv co-ordinates for the sake of computing them afterwards, you have to ensure that the order in which you push uv co-ordinates into the buffer is exactly the same as before.
 
That said, if you have simple geometry like sprites, it is IMHO better to generate both geometry and uv co-ordinates on the fly. On the other hand, if you have complex model geometry with uv co-ordinates delivered aside, then don't drop the latter but apply calculations on top of them if needed.
 
6.) With glVertexAttribPointer you make each vertex data stream known to and hence usable by the vertex shader.
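For instance, continuing the sketch from point 4 (attribute locations 0 and 1 are my assumptions; a VAO is expected to be bound in a core profile):

    void bindSpriteStreams(GLuint posVBO, GLuint uvVBO)
    {
        glBindBuffer(GL_ARRAY_BUFFER, posVBO);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0); // position stream -> attribute 0
        glEnableVertexAttribArray(0);

        glBindBuffer(GL_ARRAY_BUFFER, uvVBO);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0); // uv stream -> attribute 1
        glEnableVertexAttribArray(1);
    }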
 
7.) Samplers are the things that allow a shader to access a texture. To get that right you need to fill texture memory with texel data (e.g. using glTexImage2D as you do) with a specific texture unit being active (the default is #0), and to tell the shader which texture unit to access with which sampler (via glUniform1i with the location of the sampler uniform as target).
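A sketch of that wiring (the program handle and the uniform name "uTexture" are assumptions of mine):

    void bindSpriteTexture(GLuint program, GLuint texture)
    {
        glActiveTexture(GL_TEXTURE0);             // texture unit #0, the default
        glBindTexture(GL_TEXTURE_2D, texture);    // the texel data filled via glTexImage2D lives here
        glUseProgram(program);                    // the program must be in use before setting its uniform
        GLint samplerLoc = glGetUniformLocation(program, "uTexture");
        glUniform1i(samplerLoc, 0);               // tell the sampler to read from unit #0
    }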
 
8.) There are 2 ways to configure sampling. Historically the settings are stored within the texture itself, using the glTexParameteri routine as you do. This is not entirely clean, because the parametrization belongs to the access of the texture rather than to the texture itself. Hence there is a newer way (sampler objects) where the parametrization is done on the samplers themselves. However, you do it the old way and it works fine so far, so let it be.
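For completeness, the newer way would look roughly like this (sampler objects, available since OpenGL 3.3; only a sketch):

    void createAndBindSampler()
    {
        GLuint sampler;
        glGenSamplers(1, &sampler);
        glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glBindSampler(0, sampler); // overrides the texture's own parameters on texture unit #0
    }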
 
9.) Inside the fragment shader you have the (interpolated) uv co-ordinates passed on from the vertex shader, and a sampler to which a texture is bound. The uv co-ordinates are prepared so that they are ready to use. The shader calls the sampler with the supplied uv co-ordinate tuple and gets back an RGBA value.
 
 
As you can see from the description above, there is always a unit on the GPU, be it a vertex attribute slot, a texture unit, or whatever (yes, there are more), and you have to tell both sides, OpenGL and the shader, which units you want to use.



#5156235 OpenGL Texture Coordinates

Posted by haegarr on 27 May 2014 - 02:52 AM


Well, it's the sprite sheet specifically that I'm struggling to find information on. As I mentioned, the information I've found has been for previous opengl versions. While many current version tutorials specify how to load, bind etc they, in my search, have passed over dealing with texture coordinates in opengl 4. As far as "changing the texture coordinates" I simply mean changing them from the default which displays the entire image to instead crop a specific set of coordinates.
There is no default in the sense of OpenGL. Each vertex needs an explicitly given pair of u,v co-ordinates. Assuming that the sprite is rendered as a quad, you have 4 vertices, and each of them has its own u,v pair. To map the entire texture once, supposing its use as GL_TEXTURE_2D, you explicitly use the entire span of [0,1] for u and [0,1] for v, yielding the tuples (0,0), (0,1), (1,0), and (1,1) at the corners of the quad. Any subset of the texture has co-ordinates inside these spans. E.g. u being in [0,0.5] and v being in [0,1] denotes half of the texture as a vertical strip, addressed at the vertices as (0,0), (0,1), (0.5,1), and (0.5,0).
 
When computing such relative co-ordinates, one needs to consider some things. OpenGL uses u co-ordinates running from left to right, and v co-ordinates running from bottom to top. The co-ordinate 0 means the left / lower border of the leftmost / bottom texel, and the co-ordinate 1 means the right / upper border of the rightmost / top texel. With this in mind, if you want to address the texel with indices s in [0,w-1] and t in [0,h-1], where w and h are the dimensions of the texture measured in texels, you have to use
    ul := s / w      for the left border of the texel
    ur := ul + 1 / w      for the right border of the texel
    um := ( ul + ur ) / 2      for the center of the texel
and analogously for v when using h instead of w.
 
So, when the sprite's texels are in a rect with the lower left corner (s1,t1) and the upper right corner (s2,t2), you compute
    ul(s1), vb(t1), ur(s2), vt(t2)
and use them as u,v co-ordinate tuples for the vertices:
    ul(s1) and vb(t1)
    ul(s1) and vt(t2)
    ur(s2) and vt(t2)
    ur(s2) and vb(t1)
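As a small C++ sketch of those formulas (the struct and function names are mine, not from the thread):

    struct UVRect { float uLeft, uRight, vBottom, vTop; };

    // Texel rect (s1,t1)..(s2,t2) inside a w x h sheet, mapped to border co-ordinates.
    UVRect spriteUV(int s1, int t1, int s2, int t2, int w, int h)
    {
        UVRect r;
        r.uLeft   = float(s1) / float(w);     // ul(s1): left border of texel s1
        r.uRight  = float(s2 + 1) / float(w); // ur(s2): right border of texel s2
        r.vBottom = float(t1) / float(h);     // vb(t1): lower border of texel t1
        r.vTop    = float(t2 + 1) / float(h); // vt(t2): upper border of texel t2
        return r;
    }
    // The quad corners then get (uLeft,vBottom), (uLeft,vTop), (uRight,vTop), (uRight,vBottom).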
 

But, simply telling me to search "opengl 4 tutorial" seems to be less than helpful to me. I was under the impression this was a beginners forum. They could probably pass along that advice to a sticky and do away with the forum entirely I suppose. Or, let us ask our inane questions, because there is a lot of information out there, and sometimes asking in a forum to parse that down to something useful can be helpful to someone who is in the process of learning.
Excuse my rash answer above; but: computing texture co-ordinates is independent of the OpenGL version. Passing vertex data (with texture co-ordinates being part of it) is dependent on the OpenGL version. The former topic, now that it's clear what "change" means in the OP, is explained above. The latter topic can be found in the tutorials (which you already have). Most of the OP deals with VBOs and vertex data passing, hiding the actual question. Just as my excuse. :)



#5156225 OpenGL Texture Coordinates

Posted by haegarr on 27 May 2014 - 02:08 AM


I've scoured the forums(and quite a bit of the internet-realm) but can only seem to find out of date ways of handling this. I apologize if this is a repeat question, but all of the threads I've found on this are quite old, or at least old enough they don't seem to apply.

Start the internet-wide search engine of your choice and try "opengl 4 tutorial"; you'll get at least 3 hits on the first page that are already suitable to answer how to deal with textures in OpenGL nowadays.

 

Regarding sprite sheets (or texture atlases, to be precise): This is something that is independent of the OpenGL version. Any good tutorial should tell you how texture co-ordinates are to be interpreted, and what the correlation between vertex positions and texture co-ordinates is.

 


… the method of changing the texture coordinates ...

I don't understand what "change" means here exactly. Compute the correct vertex positions and texture co-ordinates on the CPU side, store them into a VBO, pass them to a shader, and use them to draw.




#5155619 Entity-Component System Implementation

Posted by haegarr on 24 May 2014 - 04:08 AM

First off:

 

1.) There is no single "right way" of using component-based entities. Having no explicit entities or concentrating behavior in systems are not necessarily characteristics of an ECS.

 

2.) In OOP you have 2 ways of building your classes: inheritance or composition. Inheritance was new with OOP, and so it was often presented as a panacea. However, very often composition is the better way to go. ECS is just a buzzword that means using composition to create game objects. That does not mean that inheritance is to be dropped, and it does not mean that everything should be pressed into the ECS scheme.

 

That said, let's look at your questions:

 


… Whats the proper way to have the components interact with each other? ...

You want to establish an ECS using sub-systems. So components do not interact with each other in any way. Instead, sub-systems work on the components. They have access to the components they manage themselves, and they gain access to other components by requesting them from other sub-systems.

 

Components of a specific type belong to a specific sub-system. E.g. the Placement component (an entity's position and orientation in the world) belongs to the SpatialServices sub-system. There is no real need to have a sub-system of its own per component type; e.g. the bounding volume may be managed by the SpatialServices, too. When an entity is instantiated, an ID is generated for it, its component descriptions are interpreted, the responsible sub-systems are resolved, and each component description (along with the ID) is handed over to the found sub-system, so that a new component instance is allocated and initialized therein. (Alternatively, and perhaps easier to maintain: all existing sub-systems can be iterated and the entire description handed over, letting each sub-system look for the component descriptions it is responsible for.)
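A rough C++ sketch of that alternative (all names are illustrative; nothing here is prescribed by the post):

    #include <cstdint>
    #include <vector>

    using EntityID = std::uint32_t;
    struct EntityDescription;       // the set of component descriptions; format left open here

    class SubSystem {
    public:
        virtual ~SubSystem() = default;
        // each sub-system picks the component descriptions it is responsible for
        virtual void createComponents(EntityID id, const EntityDescription& desc) = 0;
    };

    EntityID instantiateEntity(const EntityDescription& desc,
                               const std::vector<SubSystem*>& subSystems)
    {
        static EntityID nextID = 1;
        const EntityID id = nextID++;
        for (SubSystem* s : subSystems)
            s->createComponents(id, desc);
        return id;
    }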

 

Because each sub-system stores its components keyed by the entity ID, a sub-system can request foreign components from other sub-systems just by that ID. However, I've named the sub-systems "services" because they may provide more sophisticated accessors as well. Looking again at the SpatialServices, it is a good place to implement spatial proximity, collision, and containment requests. For this purpose the SpatialServices may internally use e.g. an octree structure. A client may request "give me a list of IDs of all entities whose bounding volume is at least partly contained in the test volume handed over". So rendering (frustum culling), physics (collision detection), AI (proximity / sensing), and perhaps others may all be supported by implementing more or less general requests in the SpatialServices sub-system.
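Continuing the sketch above, such a "service" interface might look like this (Placement and BoundingVolume stand for whatever component types you actually use):

    struct Placement;       // position + orientation component
    struct BoundingVolume;  // e.g. an AABB or a sphere

    class SpatialServices : public SubSystem {
    public:
        // plain component access by entity ID
        const Placement* placementOf(EntityID id) const;

        // a more sophisticated request: IDs of all entities whose bounding volume
        // at least partly overlaps the given test volume (internally e.g. an octree query)
        std::vector<EntityID> queryOverlapping(const BoundingVolume& testVolume) const;
    };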

 

An advantage of sub-systems is that components naturally have a "me"-centric view, while a sub-system sees all components (of a specific type) side by side. For example, when two dynamic entities collide, a sub-system sees one collision between entities A and B, but from the components' point of view component A reports a collision with B and component B reports a collision with A. This is because sequencing of components is a feature of the sub-system, not of the components themselves.

 

The question now is: how does a sub-system get access to the other sub-systems? IMHO there is nothing wrong with using a singleton if you really are sure that there will be at most one single instance. However, given the number of sub-systems, I prefer to instantiate them and then link them into the project (or "game", if you prefer that term) structure; the resource libraries are linked there, too.

 


… Ex. A render component will need a position component to know where to render to correct? And a potential Fustrum Culling method might need both? ...

This is in general already answered above. However: a render sub-system (not the component, see above) is likely to be implemented in layers. The upper layer performs view culling, collects all necessary data, and generates rendering jobs. The rendering jobs are enqueued, because they are often sorted depending on the rendering pass (opaque, translucent, depth-first, whatever) for optimization by reducing state switching. The jobs are then processed by the lower layer, typically an abstraction of the underlying graphics API. Doing so gives you both: the upper layer queries other sub-systems, while the lower layer is supplied with the necessary data.
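A tiny sketch of that job-queue idea (the struct layout and the sort key encoding are assumptions of mine):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct RenderJob {
        std::uint64_t sortKey; // encodes pass, material/shader, depth, ... to minimize state switches
        // references to mesh, material, transform, ...
    };

    void flushRenderQueue(std::vector<RenderJob>& jobs)
    {
        std::sort(jobs.begin(), jobs.end(),
                  [](const RenderJob& a, const RenderJob& b) { return a.sortKey < b.sortKey; });
        for (const RenderJob& job : jobs) {
            // lower layer: translate the job into graphics API calls
            (void)job;
        }
        jobs.clear();
    }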




#5152488 Calculating the Final Vertex Position with MVP Matrix and another Object'...

Posted by haegarr on 09 May 2014 - 02:35 AM


I was calculating the normal matrix against the entire MVP matrix.
 
Would I calculate this matrix instead again the ViewProjection matrix?

How exactly the normal matrix is to be computed depends on the space in which you want to transform the normals. Often normals are used in eye space for lighting calculations. This means that the projection matrix P does not play a role when computing the normal matrix.

 

Normals are direction vectors and as such invariant to translation. That is the reason why the rotational and scaling part of the transformation matrix is sufficient to deal with. So let us say that

    O := mat3( V * W )

defines that portion in eye space. (Maybe you want it in world space instead, in which case you would drop V herein.)

 

The correct way of computing the normal matrix is to apply the transpose of the inverse:

    N := ( O^-1 )^T

 

In your situation you want to support both rotation and scaling (as said, it is invariant to translation, so no need to consider translation here). With some mathematical rules at hand, we get

    ( O^-1 )^T = ( ( R * S )^-1 )^T = ( S^-1 * R^-1 )^T = ( R^-1 )^T * ( S^-1 )^T

 

Considering that R is an orthonormal basis (so R^-1 = R^T) and S is a diagonal matrix (so its inverse is diagonal, too, and hence symmetric), this can be simplified to

    N = R * S^-1

 

From this you can see that ...

 

a) … if you have no scaling, so S == I, then the normal matrix is identical to the mat3 of the model-view matrix.

 

b) ... if you have uniform scaling, i.e. the scaling factor is the same in all 3 principal directions, then the inverse scaling is applied to the normal vector, which can also be seen as a multiplication with a scalar:

     S^-1 = S( 1/s, 1/s, 1/s ) = 1/s * I

 

This is the case mentioned by Kaptein above. The difference is that Kaptein suggests, instead of preserving the length of the normal vector by the matrix transformation itself, simply using the mat3 of the model-view matrix as normal matrix and undoing the scaling introduced by it through re-normalization of the resulting vector.

 

c) … if you have non-uniform scaling, i.e. the scaling factors in the 3 principal directions are not all the same,

     S^-1 = S( 1/sx, 1/sy, 1/sz )

then you have no choice but to either compute the transposed inverse of O or else compose the result from R and S^-1 (if those are known).
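A minimal sketch of this computation, assuming GLM as the matrix library (the thread does not prescribe one), with V and W as defined above:

    #include <glm/glm.hpp>

    // General case: N = (O^-1)^T with O = mat3(V * W); also covers non-uniform scaling.
    glm::mat3 normalMatrix(const glm::mat4& V, const glm::mat4& W)
    {
        const glm::mat3 O = glm::mat3(V * W);  // rotation + scaling portion in eye space
        return glm::transpose(glm::inverse(O));
    }
    // Special cases: without scaling (S == I) this equals glm::mat3(V * W); with uniform scaling
    // you may use glm::mat3(V * W) directly and re-normalize the transformed normals instead.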



#5152486 Calculating the Final Vertex Position with MVP Matrix and another Object'...

Posted by haegarr on 09 May 2014 - 01:49 AM

1.) Situation A: We have sub-meshes. This is usually the case if the model has more than a single material and the rendering system cannot blend materials. So the model is divided into said sub-meshes where each sub-mesh has its own material. Further, each sub-mesh has its local placement S relative to the model. If we still name the model's placement in the world M, then the local-to-world matrix W of the sub-mesh is given by

    W := M * S

 

2.) Situation B: We have sub-meshes as above, but the sub-meshes are statically transformed, so that S is already applied to the vertices during pre-processing. The computation at runtime is then just

    W := M

 

3.) Situation C: We have models composed by parenting sub-models, e.g. rigid models of arms and legs structured in a skeleton, or rigid models of turrets on a tank. So a sub-model has a parent model, and the parent model may itself be a sub-model with another parent model, up until the main model is reached. Each sub-model thus has its own local placement S_i relative to its parent. Corresponding to the depth of parenting, we get a chain of transformations like so:

    W := M * S_n * S_n-1 * … * S_0

where S_0 belongs to the innermost sub-model (the one the mesh is attached to, which is not itself a parent of any sub-model), and S_n belongs to the outermost sub-model (the one whose parent is the main model itself).

 

4.) View transformation: Having our models given relative to the global "world" space due to W, we also place a camera with transform C relative to the world. Because we see in view space and not in world space, we need the inverse of C and hence obtain the view transform V

    V := C^-1

 

5.) The projection P is applied in view space and yields normalized device co-ordinates.

 

6.) All together, the transform so far looks like

    P * V * W

(still using column vectors) to get from a model's local space into device co-ordinates.

 

Herein P perhaps never changes during the entire runtime of the game, V usually changes from frame to frame (leaving things like portals and mirrors aside), and W changes from model to model during a frame. This can be exploited by computing P * V once per frame and re-using it during that frame. So the question is what to do with W.

 

7.) Comparing situation C with situation A shows that A looks like C with just a single level. Situation A would simply allow for setting the constant M once per model, and applying the varying S per sub-mesh (i.e. render call). Using parentheses to express what I mean:

    ( ( P * V ) * M ) * S

 

But thinking about the performance of render calls, where switching textures, shaders, and whatnot has its costs, switching materials frequently may especially be a no-go, so render calls with the same material are to be batched. Obviously this contradicts the simple re-use of M as shown above. Instead, we get a situation where for each sub-mesh its own W is computed on the CPU, and the render call gets

    ( P * V ) * W

 

BTW, this is also the typical way to deal with parenting, partly due to the same reasons.

 

Another drawback of using both M and S is that you either supply both matrices also for stand-alone models (for which M alone would be sufficient), or else you have to double your set of shaders.

 

So in the end it is usual to deal with all 3 situations in a common way: computing the MODEL matrix W on the CPU and sending it to the shader. (Just a suggestion.) ;)
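A rough sketch of that common way, assuming GLM on the CPU side (type and function names are mine):

    #include <glm/glm.hpp>
    #include <vector>

    struct SubMesh { glm::mat4 S; /* mesh + material data ... */ };

    void drawModel(const glm::mat4& P, const glm::mat4& C,
                   const glm::mat4& M, const std::vector<SubMesh>& subMeshes)
    {
        const glm::mat4 PV = P * glm::inverse(C); // V := C^-1; in practice computed once per frame
        for (const SubMesh& sm : subMeshes) {
            const glm::mat4 W   = M * sm.S;       // situation A; chain further S matrices for parenting
            const glm::mat4 MVP = PV * W;
            // upload MVP (or PV and W separately) as uniform(s) and issue the draw call for sm
            (void)MVP;
        }
    }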




#5152311 Calculating the Final Vertex Position with MVP Matrix and another Object'...

Posted by haegarr on 08 May 2014 - 07:54 AM


The question is now each object in question has their own rotation, translation, scaling, etc. separate from the MVP for the screen.

The abbreviation MVP means a matrix composed of a model's local-to-world transform (M), the world-to-view transform (V, also known as inverse camera transform), and the perspective transform (P). So there is no need for another model transform, because there is already one there (namely M).

 

What should ObjectOffsetMatrix bring in that is not already contained in M?




#5151855 Why does this work?

Posted by haegarr on 06 May 2014 - 12:06 PM

As far as I understand it:

 

1.) window.requestAnimFrame is statically initialized with the first match found while looking up a couple of (browser-specific) setters for a timed callback.

 

2.) The function animate() registers itself as the function to be called when the next 1/60 second has passed, and when this setter returns, the background is rendered. I don't see why this should not work.




#5151608 Replacing glMultMatrixd with Shaders ...

Posted by haegarr on 05 May 2014 - 07:28 AM


… how to replace the glMultMatrixd functionality in shaders ...

Basically you don't do so at all. glMultMatrix, like all the other matrix routines, was used to compute a matrix on the CPU side. The resulting matrix was then sent to the shaders. To mimic this, you use a matrix library like e.g. glm and send the result as a uniform to the shader.

 

Of course, you can do the multiplication inside the shader (as Aliii has shown above), and for some use cases this would be the way to go. But the vertex shader will compute the product once for every vertex it processes, whereas (as far as the matrices on the old matrix stack are concerned) it needs to be computed at most once per model.

 

Hence the usual way is to use a matrix library, compute the matrices on the CPU side, send them as uniforms (stand-alone or in a UBO) to the shader, and use them in the shader as they are. As said, special use cases may need another way to go.
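For illustration, a sketch of that path with GLM (the program handle, the uniform name "uMVP", and the concrete transform are placeholders; an OpenGL loader header is assumed to be included):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>  // glm::translate etc.
    #include <glm/gtc/type_ptr.hpp>          // glm::value_ptr

    void uploadMVP(GLuint program, const glm::mat4& P, const glm::mat4& V, const glm::vec3& position)
    {
        // CPU-side composition, replacing the old glTranslate/glMultMatrix calls
        const glm::mat4 mvp = P * V * glm::translate(glm::mat4(1.0f), position);
        glUseProgram(program);
        glUniformMatrix4fv(glGetUniformLocation(program, "uMVP"), 1, GL_FALSE, glm::value_ptr(mvp));
    }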




#5151412 Resource Cache and Models

Posted by haegarr on 04 May 2014 - 07:28 AM


You probably want to be carefull in how you construct your load units, so that they load only what you need got that unit. Dependency checks in your bundle build step can alleviate these issues, so that you only bundle resources that are often used together. This will remove your problems in that the load unit has to be loaded completely.

You are right, except that loading bundles in their entirety is not a problem but a feature. Its purpose is to reduce mass storage accesses (assuming that not all platforms have an SSD). A load unit may of course contain just a single resource, but it may also (as an example) contain the mesh plus the necessary textures plus a skeleton plus animations for a model, or however else you want the toolchain to bundle the resources.




#5151379 Resource Cache and Models

Posted by haegarr on 04 May 2014 - 03:18 AM

I've implemented 2 ways of handling this. The first way is like what Hodgman has mentioned above: it works when references to other resources are found during the interpretation step. The other way is that of bundles. An archive file has a table of contents with an entry for each single stored resource as usual, but the table of contents refers to so-called load units instead of the resources directly. A load unit is referred to by just a file offset and byte count and denotes the section of the file that has to be loaded when the corresponding resource is requested. So a load unit may store a single resource, but it may also store a sequence of resources (the said bundle). Because all entries in the table of contents that denote resources of the same bundle refer to the same load unit, requesting any such resource causes the entire sequence of resources to be loaded. So load units are always loaded (and unloaded) in their entirety.




#5151164 Help me with that error please!

Posted by haegarr on 03 May 2014 - 05:41 AM

I still not sure what to do. if you could explain me in more details I will be very glad.

Telling you what exactly to do requires knowledge of your project set-up. Moreover I'm not a Windows programmer but one working under Unix.

 

However, in principle you need to implement a function WinMain (because you're building a Windows executable). The legacy NeHe tutorials have this function explained in tutorial 01, and I assume that the tutorials you are following have this, too. Later tutorials may have dropped it for brevity, but if you download the tutorial code instead of copying it from the screen you will probably find WinMain therein. I don't know which tutorials / sources you are following, so I cannot point you to exactly where to look.

 

EDIT: Being ninja'd twice… ;)




#5151152 Help me with that error please!

Posted by haegarr on 03 May 2014 - 03:55 AM

The "entry point" is also known as main function. Under Windows this is usually WinMain(…), while under unixes it is usually main(…). Without such a function anywhere within your code, the compiler does not know what to call when the OS hands over program execution to your executable. Either you complete the shown code snippet by an appropriate main function, or else you need to compile it as object file and link it with another object file where the main function was compiled into. 




#5150172 algorithm for calulating angle between two points

Posted by haegarr on 28 April 2014 - 02:33 PM


Correction: you would need to add PI, not 2pi, to transform an interval ranging from from -pi..pi, to the range of 0..2pi. In general to convert from a signed -n..n, range  to an unsigned 0..2n range add 1/2 the range, 1n in this case, or 1pi in the situation above.

The transform I mentioned is not linear: atan2 gives you angles in the range (-pi,+pi], where [0,+pi] is already as desired, but angles in (-pi,0) should become angles in (pi,2pi) instead, so you have to add 2pi to all resulting angles less than 0. As pseudo code:

    result := angle < 0 ? 2pi+angle : angle   w/   angle := atan2(-x, z)





