haegarr

Members
  • Content count: 4013
  • Community Reputation: 7385 Excellent
  • 1 Follower

About haegarr
  • Rank: Contributor

Personal Information
  • Interests: Programming
  1. problem with coordinate system

    Applying MA . T . MA^-1 . MB means that vertices given in coordinates relative to space B are transformed by T, which is given in space A, and the result is given in world coordinates (assuming column vector notation, as usual). However, in the OP you ask for MX being the equivalent of T but within space B. Hence you need to resolve MB . MX = MA . T . MA^-1 . MB, so

        MX = MB^-1 . MA . T . MA^-1 . MB

    This step is not necessary if you want the vertices in world space anyway, of course.
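    A minimal sketch of that resolution, using GLM for illustration (it assumes MA and MB map local coordinates of their spaces to world coordinates, column vector convention as above):

        #include <glm/glm.hpp>

        // MX is the equivalent of T (given in space A) expressed in space B:
        // MX = MB^-1 * MA * T * MA^-1 * MB
        glm::mat4 equivalentInB(const glm::mat4& MA, const glm::mat4& MB, const glm::mat4& T)
        {
            return glm::inverse(MB) * MA * T * glm::inverse(MA) * MB;
        }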
  2. problem with coordinate system

    I'm not quite sure whether I understand your problem correctly.

    A) Generally one wants to map not particular vertices but the entire space. The common space, so to say, is the world space. So, using your matrix names, we have

        MB . MX = MA
        <=> MB^-1 . MB . MX = MB^-1 . MA
        <=> MX = MB^-1 . MA

    or in words: from A to world and back to B. This makes sense only if MB is known and given. (A sketch of this case follows below.)

    B) If you really want to use particular vertices and all but MX is known, then you have up to 6 unknown values in MX (it's 2D, isn't it?). With 3 vertices you have 6 equations, so it can be solved in general, too. However, usually you have some constraints on MX, for example being a composition of translation, rotation, and perhaps scaling, which gives you just 5 unknowns in 2D if scaling is not uniform along both axes, just 4 unknowns for uniform scaling, and just 3 without scaling. In such cases you need to think about how to account for these constraints.
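    A minimal sketch of case A), again using GLM (assuming MA and MB map local coordinates to world coordinates):

        #include <glm/glm.hpp>

        // From A to world and back to B: MX = MB^-1 * MA
        glm::mat4 fromAToB(const glm::mat4& MA, const glm::mat4& MB)
        {
            return glm::inverse(MB) * MA;
        }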
  3. As Kylotan already mentioned, a game usually runs in another way than an application does. The SurfaceView is a vehicle that the operating system enforces you to use. Nevertheless you can just use it as the driver to tick your own game loop.

    Such a game loop is usually built as a defined sequence of systems / modules / services / however you name it. All stuff that belongs to world updating (e.g. player input, animation, physics, collision resolution) is done early in the sequence. The graphical rendering is done at the very end of the sequence. Because the respective modules are already updated, the world state is fixed at the moment of rendering. Due to this, the view (understood as the presentation of what the renderer renders) is passive.

    If rendering is done all the time anyway, stuff like the level name, wave number, etc. are just data values within the world. The renderer considers them simply by - well - rendering some text or other visual representation in its normal update phase within the game loop. It is not exactly polling; it is more like immediate-mode user interface stuff. (A sketch of such a loop follows below.)
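    A minimal sketch of such a loop, with hypothetical sub-system types (world updating first, rendering last against the then-fixed world state):

        struct Input     { void update(float dt) { /* player input */ } };
        struct Animation { void update(float dt) { /* ... */ } };
        struct Physics   { void update(float dt) { /* ... */ } };
        struct Collision { void resolve()        { /* ... */ } };
        struct Renderer  { void render()         { /* also draws level name, wave number, ... */ } };

        struct GameLoop {
            Input input; Animation animation; Physics physics;
            Collision collision; Renderer renderer;

            void tick(float dt) {
                input.update(dt);       // world updating, early in the sequence
                animation.update(dt);
                physics.update(dt);
                collision.resolve();
                renderer.render();      // the view is passive: it just presents the state
            }
        };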
  4. This is strange, because the view is passive in all variants of model presentation patterns. It shows what is there, but it does not create anything. A game world is created by the level loading system; the view is just the output medium of the rendering system, nothing more. Moreover, the renderer just renders the view as a presentation of the current state of the world, so the renderer and especially the view need not be notified about world state changes. The only necessary thing (dependent on the OS) may be to mark the view dirty, causing a redraw at the next opportunity. I don't understand. Could you perhaps describe your problem by an example?
  5. Matrix math help wanted

    Just checked the orientation/scale matrix, and yes - it looks like the inverse. As already written: with the definition m2 := anim * inv, the matrix "anim" is computed as

        anim = m2 * inv^-1

    i.e. multiply "m2" with the inverse of "inverse" on the right.
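    A minimal sketch with GLM (assuming the implicit fourth column of the matrices has been made explicit so they fit into a glm::mat4):

        #include <glm/glm.hpp>

        // anim = m2 * inv^-1; expect small numerical differences in the result.
        glm::mat4 reconstructAnim(const glm::mat4& m2, const glm::mat4& inv)
        {
            return m2 * glm::inverse(inv);
        }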
  6. Matrix math help wanted

    Ah, I forgot about the routine implementation as mentioned in the OP. Yes, that routine works as if the matrices had a fourth column of [ 0 0 0 1 ], so the assumption is correct. It just leaves out the terms where a multiplication with 0.0 would happen (because adding the resulting 0.0 would not change anything), and leaves out the multiplication with 1.0 (because that would not change anything of m[3][j]). So the routine actually implements a standard matrix product under the condition that the 4th column is statically given as [ 0 0 0 1 ] and hence just not stored explicitly but known implicitly. Feed well known matrices into it and look at the result: e.g. feeding the identity matrix must result in the identity matrix if inversion is implemented. (See the sketch below for the product written out.)
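    Written out, such a routine may look like this (a sketch, assuming row-major float[4][3] storage where the bottom row holds the translation and the fourth column [ 0 0 0 1 ] is implicit):

        // Product of two affine matrices stored as float[4][3]; out = a * b.
        void mulAffine(const float a[4][3], const float b[4][3], float out[4][3])
        {
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 3; ++j) {
                    float sum = 0.0f;
                    for (int k = 0; k < 3; ++k)
                        sum += a[i][k] * b[k][j]; // terms with a[i][3] == 0 are left out
                    if (i == 3)
                        sum += b[3][j];           // implicit a[3][3] == 1 adds b's translation
                    out[i][j] = sum;
                }
        }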
  7. Matrix math help wanted

    The 4 matrices as given above cannot be the entire truth, because they are given as 3 columns by 4 rows matrices. First, a matrix needs to be square to be invertible (so here either 3 x 3 or 4 x 4). Second, a 4 x 3 matrix cannot be multiplied by a 4 x 3 matrix; in general an n x m matrix can only be multiplied by an m x p matrix. Presumably the full base matrix looks like this (a 4th column added):

        0.0, 0.0, 1.0, 0.0
        1.0, 0.0, 0.0, 0.0
        0.0, 1.0, 0.0, 0.0
        -0.29698506, 0.00000071708286, 0.86644721, 1.0

    so that the upper left 3 x 3 matrix is a combined rotation and scale matrix, and the lower left 1 x 3 row is the position vector. Is that a correct assumption? However, the "inverse" matrix does not seem to me to be the inverse of the "base" matrix, because the product of their upper left 3 x 3 matrices isn't the identity matrix.
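    A small check for that last observation (a sketch, assuming the float[4][3] storage from above):

        #include <cmath>

        // True if the product of the upper left 3 x 3 blocks of a and b is
        // (approximately) the identity matrix, i.e. the blocks invert each other.
        bool upperLeftBlocksInvert(const float a[4][3], const float b[4][3], float eps = 1e-4f)
        {
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j) {
                    float sum = 0.0f;
                    for (int k = 0; k < 3; ++k)
                        sum += a[i][k] * b[k][j];
                    const float expected = (i == j) ? 1.0f : 0.0f;
                    if (std::fabs(sum - expected) > eps)
                        return false;
                }
            return true;
        }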
  8. Matrix math help wanted

    Mathematically it is

        m2 = a * i
        <=> m2 * i^-1 = a * i * i^-1
        <=> m2 * i^-1 = a

    because i * i^-1 == I (the identity matrix) and a * I == a. In other words, you need to compute the inverse of the matrix "inverse" (well, the name "inverse" for that matrix is an unfortunate choice) and multiply both sides of the equation with it, on the same side of both terms (here: on the right). However, the original matrix is usually only reconstructed approximately, because numerical imprecision introduces small differences during matrix inversion and multiplication. Of course, it works if and only if "inverse" is invertible at all (but the name indicates that, doesn't it?).
  9. Help with creating a Layout Group

    IMO the simplest way is to determine the total required width first, then compute the starting position from this, and then reposition all children, like so:

    1) Iterate the list of children and sum up their respective widths; if necessary, count the children in this step.
    2) Add N-1 times the gap width to the above sum, where N is the count of children.
    3) Compute the available width (of the container) minus the sum computed above (so you get the remaining space), and divide the result by 2 (so you evenly divide the remaining space between the left and the right side); let's name the result pos.
    4) Iterate the children again; for each turn, set the position of the current child to pos, then add the width of the child plus the width of a gap to pos.

    Of course, if too many children are in the group, the result of 3) will become negative. If this situation may occur in your use case, then you need to think about how to handle it (e.g. clipping, line breaking, whatever...). A sketch of the steps follows below.
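    A minimal sketch of these steps (hypothetical Child type for illustration):

        #include <vector>

        struct Child { float x = 0.0f; float width = 0.0f; };

        void layoutCentered(std::vector<Child>& children, float containerWidth, float gap)
        {
            if (children.empty()) return;
            float total = 0.0f;                          // step 1: sum of child widths
            for (const Child& c : children) total += c.width;
            total += gap * (children.size() - 1);        // step 2: N-1 gaps
            float pos = (containerWidth - total) * 0.5f; // step 3: split the remaining space
            for (Child& c : children) {                  // step 4: place each child
                c.x = pos;
                pos += c.width + gap;
            }
        }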
  10. The view vector defines the direction of the one axis of your camera system which passes through the center of the view. You typically don't want the camera to be tilted, so the view vector and the up vector are all you need to calculate the orientation matrix of the camera. Sines and cosines are not needed for this step; instead, the cross product of vectors is what you need. Several threads in this forum discuss that topic; a possible keyword for a search is "look-at matrix". (A sketch follows below.)

    The fact that the projection is orthographic means that all projector line directions are identical to the view vector, so no opening angle is needed. However, orthographic projection also requires a camera position, since 2 of its 3 dimensions are relevant (and the 3rd becomes relevant as soon as you use front-/backplane clipping). You further need a scaling, because there is a mapping between the units in which the world is defined and the units of the screen. (swiftcoder already mentioned this above.) Maybe you want to use something like "the entire drawing should be seen" and "looking at the center of the bounding box of the entire drawing" for resolving such values. Only when you have all this stuff can a view transform be computed. The principles of this kind of math are not really different across graphic APIs.
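    A minimal sketch of building the orientation from the two vectors via cross products, using GLM (assuming a right-handed system with the camera looking down its negative z axis):

        #include <glm/glm.hpp>

        glm::mat3 cameraOrientation(const glm::vec3& view, const glm::vec3& up)
        {
            const glm::vec3 forward = glm::normalize(view);
            const glm::vec3 right   = glm::normalize(glm::cross(forward, up));
            const glm::vec3 trueUp  = glm::cross(right, forward);
            return glm::mat3(right, trueUp, -forward); // columns are the camera axes
        }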
  11. Data access layer design for games

    As Shaarigan has written above: "it depends". It is clearly not meaningful to try to solve all problems by fitting them into a specific pattern. Instead a software, especially one as complex as a game engine, can only be written by using the particularly right tool for each of the manifold problems. So here are some common situations and possible solutions:

    In the cross section of posts (and matching my personal opinion, so I may be biased on this), resource loading is a thing that involves approximately four instances which together define what I usually call the resource manager:

    * the manager's frontend interface and processing logic (further called simply "frontend")
    * a resource cache buried within the manager
    * a resource loader buried within the manager
    * a memory manager to deal with memory allocation (if you want to do this)

    The concept is roughly this: clients send their request for a resource to the manager's frontend (they don't see anything else anyway). The frontend makes a look-up of the requested resource by calling the cache. On a hit the cache returns the requested resource and marks it as being used one more time, and the frontend returns that object, of course. On a cache miss the frontend instructs the resource loader to create a new instance of the requested resource. The loader may be complex by itself, e.g. dealing with directories of resources, archives, overriding, ... but that is not of special interest here. Instead we just say that the resource frontend uses the loader frontend to cause a resource loading. However, the loader may use the memory management in case we want to load resources considering their lifetime (e.g. the player avatar will be long lasting, other stuff exists for the entire lifetime of the level, and other stuff may be swapped out again when a memory budget gets exhausted). The loaded instance is then put into the cache and returned to the caller. (A sketch of the frontend/cache/loader interplay follows at the end of this post.)

    The above description shows a kind of access control: everything a client can do goes through a (more or less) narrow API, regardless of how complex the process in the background is. However, is a resource manager already business logic or still a data store? IMO it is both: from the outside it looks like a data store, but from the inside it has some logic built in to help the overall system run fluently.

    Coming now to the second aspect of accessing data as shown in the OP: entity updates. First off, entities are not resources. Instead entities are runtime objects that use resources. E.g. an archer entity uses specific skeleton, mesh, texture, animation, and AI resources, maybe even a specific entity template resource. That means w.r.t. separated concerns that something I will call the "entity manager" here is available. This thing is on a higher layer of abstraction, and as such it is allowed to use lower layers. Thinking in layers is another common way to architect software. It means that low level services are used by higher level services but not the other way around.

    Having a foreach Entity::update() as such is IMO not possible in reality, because there is no such thing as a single isolated update of each and every entity. The game loop is in fact a sequence of updates run by sub-systems, where each sub-system deals with a particular aspect (for example a particular component of an entity when thinking of ECS; you may want to read Jason Gregory's book chapter about updates, e.g. the excerpt here on gamasutra). It is not meaningful to move the first entity and then run a collision detection and correction, and then step over to the next entity and do the same. Furthermore, static objects do not even have a movement update. Or in other words: different entities have different needs.

    This leads to another IMO fundamental thought: each aspect should manage its data itself (some exceptions may exist) and do this in a way that allows it to run smoothly. This allows the use of data access patterns matching the particular needs of sub-systems, rather than enforcing sub-systems to access data in one globally provided way.

    tl;dr: Discussing data access should be done narrowed down to one or a few related use cases. Hopefully not just my 2 cents
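    A minimal sketch of the frontend/cache/loader interplay described above (hypothetical types; the memory manager is omitted):

        #include <memory>
        #include <string>
        #include <unordered_map>

        struct Resource { /* skeleton, mesh, texture, ... */ };

        class ResourceLoader {
        public:
            std::shared_ptr<Resource> load(const std::string& name) {
                // directories, archives, overriding, ... omitted here
                return std::make_shared<Resource>();
            }
        };

        class ResourceManager {
        public:
            std::shared_ptr<Resource> request(const std::string& name) {
                auto it = cache_.find(name);
                if (it != cache_.end())
                    return it->second;          // cache hit: reuse the existing instance
                auto res = loader_.load(name);  // cache miss: delegate to the loader
                cache_[name] = res;             // put the loaded instance into the cache
                return res;
            }
        private:
            std::unordered_map<std::string, std::shared_ptr<Resource>> cache_;
            ResourceLoader loader_;
        };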
  12. Unlikely perhaps, but not impossible. There are several examples out there that mix both concepts, e.g. the engine used for Rayman Legends comes to my mind - not to mention my own experiments. Of course you still have to iterate, but you iterate lists of objects whose types you know. Notice that I've written "only if rendering of all sprites and all meshes can be separated". So in that case you can create a list of sprite renderables and a list of mesh renderables.

    ------

    There are many ways to write a graphic rendering engine. If we move away from "how to solve this particular problem" to "how to design a graphic engine" then - yes - I too would suggest going another way. My personal solution currently supports two high level rendering APIs that are distinct in the way graphics are described: one uses structures comparable with SVG, and the other the usual 3D scene stuff. So on this high level I do something similar to what the OP asks for. However, both high level renderers generate the same kind of intermediate graphic rendering jobs, which are tagged and enqueued. The sorting then happens on the jobs in the queue, using the said tags as criteria. So scene description (i.e. how the game objects are organized w.r.t. the scene) and render related sorting are two totally separate things. (A sketch of such a job queue follows below.) Finally the low level rendering works on the (unified) jobs and translates them into OpenGL / OpenGL ES / D3D / whatever else. The two high level rendering APIs mentioned above are separate insofar as they work on different kinds of scenes, so to say.

    Looking at the 3D scene renderer in more depth, it in fact does not implement a full featured renderer. Instead it is something that understands how to process a given render pipeline. Mesh rendering is part of that pipeline. When this part is processed, the renderer "just" looks at the components of the game objects, identifies the kind of sub-renderer (i.e. one suitable for the requested rendering effect), and calls that as a graphic job generator. (This is somewhat similar to Unity3D's renderer components.) This way renderer selection (at this level) is done the same way as all other component look-ups because - well - it is in fact a component look-up. And because I handle components in dedicated sub-systems, that look-up can be done just at instantiation time; after that it is a question of how the sub-system manages the distinct effect renderers.

    I mentioned all this because it shows a way to decouple several concerns and put them into their own software layers, so they can be handled in a way that is suitable for the specific concern. On the other hand ... well, doing it this way is a notable amount of work for sure.
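    A minimal sketch of tagged render jobs being sorted independently of the scene organization (hypothetical job layout):

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct RenderJob {
            uint64_t sortKey; // tag bits: layer, material, depth, ... packed for sorting
            // payload: geometry handle, uniforms, ... omitted
        };

        void flush(std::vector<RenderJob>& queue)
        {
            std::sort(queue.begin(), queue.end(),
                      [](const RenderJob& a, const RenderJob& b) { return a.sortKey < b.sortKey; });
            for (const RenderJob& job : queue) {
                // translate the job into OpenGL / OpenGL ES / D3D / ... calls
            }
            queue.clear();
        }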
  13. Notice please that this

        for (std::vector<Renderable*>::iterator i = renderList.begin(); i != renderList.end(); ++i) {
            // pseudo code
            if (*i is a Sprite)
                useSpriteRenderer(*i);
            else if (*i is a Mesh)
                useMeshRenderer(*i);
        }

    does not really help with avoiding the cast, because the particular renderers still just get a Renderable. What alvaro suggests is something that is called a dispatcher (IIRC). It lets the objects identify their relevant type by themselves. I.e. with the APIs of the renderers looking like this

        class SpriteRenderer {
        public:
            void render(Sprite* renderable);
        };

        class MeshRenderer {
        public:
            void render(Mesh* renderable);
        };

    the dispatching may look like this:

        class Renderable {
        public:
            virtual void callRenderer() = 0;
        };

        class Sprite : public Renderable {
        public:
            virtual void callRenderer() override { someSpriteRenderer->render(this); }
        };

        class Mesh : public Renderable {
        public:
            virtual void callRenderer() override { someMeshRenderer->render(this); }
        };

    However, now every Renderable needs access to the matching renderer instance to work. This can be solved by passing all defined renderers:

        struct Renderers {
            SpriteRenderer* theSpriteRenderer;
            MeshRenderer* theMeshRenderer;
        };

        class Renderable {
        public:
            virtual void callRenderer(const Renderers* renderers) = 0;
        };

        class Sprite : public Renderable {
        public:
            virtual void callRenderer(const Renderers* renderers) override {
                assert(renderers != nullptr && renderers->theSpriteRenderer != nullptr);
                renderers->theSpriteRenderer->render(this);
            }
        };
        ...

    This way is IMO definitely to be preferred over a singleton concept, because it allows preparing the renderers accordingly for the upcoming rendering pass. The dispatch concept can also be used for what Kylotan is suggesting, but only if rendering all sprites and rendering all meshes can be separated. Then, instead of dispatching on each rendering run, the Renderable can register right after instantiation with the renderer that is suitable for it (see the sketch below).
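    A minimal sketch of that registration variant (types and names are illustrative only):

        #include <vector>

        class Sprite;

        class SpriteRenderer {
        public:
            void add(Sprite* s) { sprites_.push_back(s); } // called once at instantiation
            void renderAll() { /* iterate sprites_ and draw each one */ }
        private:
            std::vector<Sprite*> sprites_;
        };

        class Sprite {
        public:
            explicit Sprite(SpriteRenderer& r) { r.add(this); } // register with the suitable renderer
        };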
  14. C++ Figuring out where we are on a curvy path

    But it is also not too bad for just 10 points and a densely packed array of distances (i.e. cache friendly accesses). There may be some improvements if the path consists of notably more points:

    a) You may store not the distance from one point to the next but from the very beginning to the particular point, i.e. an array of ever increasing distance values. This simply allows for binary search. (A sketch follows below.)

    b) If the point of interest does not wildly jump back and forth on the path but "wanders" in some orderly manner (and "moving an entity along" seems to me to be such a thing), then the previously found position (i.e. its index) can be stored and used as the starting point of a linear search in the next iteration. Usually only one or just a few search steps will be necessary. You may even know the direction of search (up or down the path) before the search.
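    A minimal sketch of suggestion a), assuming cumulative[i] holds the path length from the start up to point i:

        #include <algorithm>
        #include <vector>

        // Returns the index of the path point at or before the given distance.
        std::size_t findSegment(const std::vector<float>& cumulative, float distance)
        {
            auto it = std::upper_bound(cumulative.begin(), cumulative.end(), distance);
            if (it == cumulative.begin())
                return 0; // before the first point
            return static_cast<std::size_t>(it - cumulative.begin()) - 1;
        }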
  15. ^^ which is absolutely okay, because pre-processed data allows for the lowest load times. A solution is to think of the array of vertexes as a blob instead of structured data. You actually need to have some metadata, namely a specification of the vertex structure and the size (in bytes) of the vertex data. The specification can be as simple as an enum or as complex as a full-featured structural description. However, you'll then use the specification for both the buffer allocation and, if needed, for an overlay to access the vertexes on the CPU side. (A sketch follows below.)

    I do exactly that for resource management. Resources are stored with metadata used just for loading, metadata that describes the resource (this is actually a serialization), and usually one or more blobs of data. The resource loader interprets its own metadata to understand what to load, uses a deserializer to load the resource metadata, and loads the blob as - well - just a blob.
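    A minimal sketch of the blob-plus-metadata idea (names are illustrative only):

        #include <cstdint>
        #include <vector>

        enum class VertexFormat : std::uint32_t { PosNormalUV, PosColor };

        struct VertexBlob {
            VertexFormat format;            // metadata: how the vertexes are laid out
            std::uint32_t sizeBytes;        // metadata: total size of the vertex data
            std::vector<std::uint8_t> data; // the blob itself, loaded as-is
        };

        struct PosNormalUV { float pos[3]; float normal[3]; float uv[2]; };

        // CPU-side overlay access, valid only if the format matches:
        inline const PosNormalUV* asPosNormalUV(const VertexBlob& b)
        {
            return b.format == VertexFormat::PosNormalUV
                 ? reinterpret_cast<const PosNormalUV*>(b.data.data())
                 : nullptr;
        }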