
haegarr

Member Since 10 Oct 2005

#5199950 intersection of two lines

Posted by haegarr on 25 December 2014 - 04:47 AM

[...] but I can't understand how it works. Can someone please try and explain it?

The method is this: There are 2 line segments p0p1 and p2p3 given. They can be expressed by using a ray equation (i.e. a directed line) when the independent variable of the ray is restricted. In this case

     r01( s ) := p0 + s * ( p1 - p0 )   w/   0 <= s <= 1

     r23( t ) := p2 + t * ( p3 - p2 )   w/   0 <= t <= 1

(maybe in the OP s and t are exchanged; I'm too lazy to actually compute the result).
 
For the first computation steps, you ignore the restriction and ask where both rays are equal
     r01( s ) = r23( t )
 
If you solve this for s and t (it is a linear system with 2 unknowns and 2 equations in 2D), you finally have to check whether both s and t fall in the originally defined restriction.
 
It works perfectly. [...]
No, it does not, at least not in general: It breaks if the 2 lines are parallel / anti-parallel! In that case you'll get a division by zero when computing the value of ip.
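The solve described above can be sketched like this, including the parallel check that the quoted code is missing (names are illustrative, not from the OP's code):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Solves r01(s) = r23(t); returns true and writes s, t only if the
// segments intersect within their restrictions 0 <= s, t <= 1.
bool intersectSegments(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float& s, float& t)
{
    const Vec2 d1 = { p1.x - p0.x, p1.y - p0.y }; // direction of r01
    const Vec2 d2 = { p3.x - p2.x, p3.y - p2.y }; // direction of r23
    // Determinant of the 2x2 linear system; it is zero exactly for
    // parallel / anti-parallel lines, the case that breaks naive code.
    const float denom = d1.x * d2.y - d1.y * d2.x;
    if (std::fabs(denom) < 1e-6f)
        return false; // no unique solution; avoids the division by zero
    const Vec2 e = { p2.x - p0.x, p2.y - p0.y };
    s = (e.x * d2.y - e.y * d2.x) / denom;
    t = (e.x * d1.y - e.y * d1.x) / denom;
    // Finally check the originally defined restrictions.
    return 0.0f <= s && s <= 1.0f && 0.0f <= t && t <= 1.0f;
}
```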



#5198908 Animation questions (many questions :P)[SOLVED]

Posted by haegarr on 18 December 2014 - 02:59 AM


[...] I'll try to debug each of those options, but it will probably take some time (there are a lot of things that could be going wrong for each of those, with a lot of options for some of them).

Yeah, that's the reason why I spoke of a very simple, manually made (i.e. hardcoded data) model, if possible. If you have a model that is known to be well defined, any exporter / importer problem is excluded. If that simple model renders well (without animation), I would test some different bone settings (still without animation) to ensure that the skinning works. If this is also okay, then I'd add some simple animation. At the very end, I'd enable the importer and test with a more complex setup.




#5198748 Animation questions (many questions :P)[SOLVED]

Posted by haegarr on 17 December 2014 - 06:07 AM

A thought about how to proceed: I would go with at most 4 bones per vertex for now. If that works, going for more is a question of refactoring. I would also consider building up a test scene manually, so as to have a chance to simplify the model to the bare necessities. While having an overview is fine for knowing where to go, non-incremental development makes failure tracking difficult.

 

For now: Good luck! :)




#5198736 Animation questions (many questions :P)[SOLVED]

Posted by haegarr on 17 December 2014 - 03:24 AM

1: good catch on the timing. [...] Out of curiosity, is there a reason that ought to be throwing the assert? Should I just initialize the clock earlier in the program so there's a larger value passed in on the first iteration? This was one of the quirks of the tutorial I was struggling with.

Assertions are a runtime mechanism to confirm that the state of the program is as expected. With assertions inserted, you don't rely on things running well, but check for it. Not all misbehavior of the code can be seen on screen. For example, some mistakes may have a subtle effect that sums up over time. You can distinguish assertions for at least 3 purposes: to check requirements on function arguments at the very beginning of a function, to ensure that return values of functions meet their definition, and to check the general state. So, if an assertion fires, and the implementation of the assertion matches its intention, there is something wrong.

 

With respect to that particular assertion: You have to investigate why it is triggered. Put an if conditional with the inverted condition of the assertion just above the assertion, put a dummy statement into the if-body, and set a breakpoint on the dummy statement. Then start the debugger and look at what value Factor has. It may be that, due to numerical imprecision, the value is just a tiny bit outside the allowed interval, or else it is far outside. In the former case you suffer from a typical problem when using floating point numbers, while in the latter case you have a mistake. How to continue depends on which it is.
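The breakpoint trick can be sketched like this, applied to the Factor check from the thread (in the real code, Factor comes from the keyframe timing computation, and the final line would be the assert itself):

```cpp
#include <cassert>

// Returns whether Factor is in the asserted interval; the if-body exists
// only to carry a breakpoint for inspecting out-of-range values.
bool factorInRange(float Factor)
{
    const bool ok = Factor >= 0.0f && Factor <= 1.0f;
    if (!ok) {                   // inverted condition of the assertion
        volatile int dummy = 0;  // set a breakpoint on this line ...
        (void)dummy;             // ... then inspect Factor in the debugger
    }
    return ok;                   // the original code would run assert(ok) here
}
```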

 

I dislike the clock just reading the game running time (as, unless I'm misunderstanding things, it will just run the animation once at start-up), and want to change it to start a clock on a keypress or event, but I'll run into the same assert throw, I believe.
The game running time is a good source for timing animations because it is independent of the frame rate, of course. On every iteration of the main game loop, the game running time is fetched at one point and used as the update time during that loop iteration. In this way all animations see the same moment in time / the same delta time. When an animation is started, the current update time is stored as the start time for that animation instance. Further, the difference of the current update time minus the start time gives the local animation time, i.e. one that has value 0 at the beginning of the particular animation's playback. This is in principle the time to be used for looking up keyframes.
 
Things get more complicated when animation loops are used. In this case the wrap at the animation loop's end can be used to adapt the start time, yielding a new one for each iteration. Notice that this adaptation is not done by storing the current game running time; you have to consider the offset coming from exceeding the end of the loop.
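A minimal sketch of this timing scheme (all names are illustrative, not from any particular engine):

```cpp
#include <cassert>
#include <cmath>

struct AnimationInstance {
    double startTime = 0.0;  // update time at which playback started
    double duration  = 0.0;  // length of one loop iteration
};

// 0 at the beginning of this instance's playback; input to keyframe lookup.
double localAnimationTime(const AnimationInstance& a, double updateTime)
{
    return updateTime - a.startTime;
}

// For looping animations: wrap into [0, duration). Equivalently one could
// advance startTime by a whole number of loop durations, keeping the offset
// that comes from exceeding the loop's end.
double localLoopedTime(const AnimationInstance& a, double updateTime)
{
    return std::fmod(localAnimationTime(a, updateTime), a.duration);
}
```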
 
Things get even more complicated when animation blending comes into play, because animation blending often requires time acceleration. But let us leave that aside for a later inspection.
 
4: I bind the VAO earlier in the program when generating the terrain. Is there any reason to use a second one or rebind anything, or should the first call be fine? Apologies for not including that in the shown code.
It is sufficient to have a single VAO active all the time if you deal with the VBOs in the pre-OpenGL-3 way, or anyway if you have just a single VBO. In fact, VAOs are intended to simplify switching between VBO configurations by having one VAO per configuration, because only 1 OpenGL call is then necessary for switching. This should give some performance boost. Unfortunately, in the past some drivers performed badly at this, as sources on the internet have reported. I don't know how it is nowadays.

 

and lastly, changed the size in glVertexAttribPointer to 32, to account for the 8 bones (or should this be 64? I would think it's still 32, since I don't want to pass the entire size of the VertexBoneData, only the half. I really need to devote a few weeks to just trying to nail down some of these details to understand things a little better. To note: I've tried both values.) [...]
What you want to pass is the byte offset from the beginning of the structure to the occurrence of the array Weights. This is the byte size of IDs[NUM_BONES_PER_VERTEX], which calculates to NUM_BONES_PER_VERTEX * sizeof(uint). Your best choice is to replace the literal constant by using the offsetof operator, or at least by the expression NUM_BONES_PER_VERTEX * sizeof(uint).
 
As said, you rely on uint being 4 bytes in size. Consider changing that, e.g. by using uint32_t, which is guaranteed to be exactly 32 bits wherever it is defined, and which at least documents what you want.
 
[...] Also, should that second value be 8 as well, or is the "size" of the int still 4?:
The value does not denote the size of the int but the count of elements in the vector passed in. That value is allowed to be 1, 2, 3, or 4, because the vertex stage inputs of the GPU are 4-element vectors (elements not supplied are automatically completed with standard values). So "8" is not an option.

 

I need to update the vertex shader to account for the increase in the number of bones. Simply adding more bones in the shader doesn't work since it's a vec4, so I'll need to adapt that...somehow. I have some work to do in the meantime (real work sadly) but when I'm done I'll try to update that to account for it. Hopefully that will solve the issues. If anyone has an idea how to go about that, I'll likely try to pass in an array of 8 or a 4x4 matrix and pass in 16 bones. Unless there's a nifty vec8 (glsl doesn't seem to think so ). Since it's reading...uniquely...from that map, where it pulls the integer then the float, I'm a little flustered with how to get the data. Simply trying to adapt the shader to read:
layout (location = 5) in int BoneIDs[8];
layout (location = 6) in float Weights[8]; 
results in the error: "warning: two vertex attribute variables (named Weights[0] and BoneIDs[1]) were assigned to the same generic vertex attribute." I'm hoping I don't have to suss that map into two separate data sets, though part of my brain would like that very much.

As said above, a vertex input is 4 elements wide. A "location" defines the index of the vertex stage input to use. So each "location" stands for a 4 element vector. When you write

    layout (location = 5) in int BoneIDs[8];

you ask the GPU to reserve 8 inputs beginning at location 5. So this means locations 5, 6, 7, ..., 12, each one used to transfer a single int value (the remaining 3 elements being filled up with 0, 0, and 1). Then you specify the Weights in a similar way, asking for locations 6, 7, 8, ..., 13. This cannot work and is hence denied by the shader compiler.

 

What you actually want to use is something like

    layout (location = 5) in ivec4 BoneIDs[2];
    layout (location = 7) in vec4 Weights[2];

This means "allocate the vertex stage inputs at locations 5 and 6, each one for a 4-element int vector (giving you 8 integers in total), as well as the vertex stage inputs at locations 7 and 8, each one for a 4-element float vector (giving you 8 floats in total)". This has an impact on the glVertexAttribPointer calls, of course.
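A sketch of the matching buffer setup, assuming the tightly packed struct from the thread with 4-byte uints and floats (the GL calls are shown as comments because they need a live context; note the integer variant glVertexAttribIPointer for the ivec4 inputs):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

constexpr int NUM_BONES_PER_VERTEX = 8;

struct VertexBoneData {
    std::uint32_t IDs[NUM_BONES_PER_VERTEX];
    float         Weights[NUM_BONES_PER_VERTEX];
};

// Byte offset of the i-th ivec4 / vec4 slice within the struct.
constexpr std::size_t boneIdOffset(int i) {
    return offsetof(VertexBoneData, IDs) + i * 4 * sizeof(std::uint32_t);
}
constexpr std::size_t weightOffset(int i) {
    return offsetof(VertexBoneData, Weights) + i * 4 * sizeof(float);
}

// With a bound VBO, the attribute setup would look like:
//
//   for (int i = 0; i < 2; ++i) {
//       glEnableVertexAttribArray(5 + i);  // ivec4 BoneIDs[2] -> locations 5, 6
//       glVertexAttribIPointer(5 + i, 4, GL_UNSIGNED_INT,
//                              sizeof(VertexBoneData), (const void*)boneIdOffset(i));
//       glEnableVertexAttribArray(7 + i);  // vec4 Weights[2] -> locations 7, 8
//       glVertexAttribPointer(7 + i, 4, GL_FLOAT, GL_FALSE,
//                             sizeof(VertexBoneData), (const void*)weightOffset(i));
//   }
```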




#5198532 Animation questions (many questions :P)[SOLVED]

Posted by haegarr on 16 December 2014 - 08:05 AM


I worry a little that I'm setting up layout 5 and 6 (the bone information) incorrectly. I've never set the buffer data using a map, and it's possible I'm going about this incorrectly.

You are probably right with this assumption ;(

 

1.) Your struct VertexBoneData uses uint as type for IDs, and you rely on the uint being 4 bytes in size.

 

2.) The constant NUM_BONES_PER_VERTEX is set to 8. If tightly packed and sizeof(uint) is actually 4, then the block of IDs occupies 8*4=32 bytes, and likewise the block of Weights occupies 32 bytes. However, the offset specified for Weights within the call to the belonging glVertexAttribPointer is just 16, making the second half of IDs be interpreted as weights. This hints at NUM_BONES_PER_VERTEX being meant to be 4 instead. The vertex shader script is also written as if NUM_BONES_PER_VERTEX were 4.

 

3.) I don't see a glGenBuffers(1, &boneBuffer) anywhere.

 

4.) I'm also missing a VAO. Although some implementations work without it, the OpenGL 3.x specification has made it mandatory. You should use it to avoid spurious mistakes.




#5198521 Animation questions (many questions :P)[SOLVED]

Posted by haegarr on 16 December 2014 - 07:15 AM

... If Haegar answers this, I think I owe him my first-born child at this point

Although this is an attractive offer for sure, I have to confess that the amount of code in the OP is a bit ... discouraging. Rumpelstiltskin's deal was easier ;)

 

For now, being obvious due to its bolded typeface, this ...

If i print out the transformations, I do get occasional outputs such as "-9.53674e-07" 

... does not hint at a problem. It is just a way of writing the number -9.53674 * 10^-7 = -0.000000953674, which is simply close to zero.

 

    float Factor = (AnimationTime- (float)pNodeAnim->mRotationKeys[RotationIndex].mTime) / DeltaTime;

One thing that is at least confusing is that inside the position interpolation code, as well as the scaling interpolation code, you use the mTime of the rotation keyframe. Well, maybe the keyframes of position, rotation, and scaling are all at the same points in time, but even then you should use mPositionKeys[PositionIndex].mTime and mScalingKeys[ScalingIndex].mTime, respectively, for the sake of clarity. If, however, the keyframes are not equally timed, your routines are definitely wrong.




#5198394 I don't know how can I solve that problem in efficient way .

Posted by haegarr on 15 December 2014 - 02:00 PM

Interpret the path as the ray r(k) := A + k * (G - A) with 0 <= k <= 1, compute all possible collisions, and choose the one with the lowest k.
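This can be sketched as follows, testing the path against a set of wall segments with the standard 2D segment-segment solve (all names are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };
struct Wall { Vec2 a, b; };

// Returns the parameter k (0 <= k <= 1) of the earliest collision of the
// path A + k*(G-A) with any wall, or a value > 1 if the path is free.
// Walls parallel to the path are skipped here for simplicity.
float earliestCollision(Vec2 A, Vec2 G, const std::vector<Wall>& walls)
{
    const Vec2 d = { G.x - A.x, G.y - A.y };
    float best = 2.0f; // sentinel beyond the path's end
    for (const Wall& w : walls) {
        const Vec2 e = { w.b.x - w.a.x, w.b.y - w.a.y };
        const float denom = d.x * e.y - d.y * e.x;
        if (std::fabs(denom) < 1e-6f) continue; // parallel: no unique solution
        const Vec2 f = { w.a.x - A.x, w.a.y - A.y };
        const float k = (f.x * e.y - f.y * e.x) / denom; // along the path
        const float u = (f.x * d.y - f.y * d.x) / denom; // along the wall
        if (k >= 0.0f && k <= 1.0f && u >= 0.0f && u <= 1.0f && k < best)
            best = k; // keep the lowest k, i.e. the first obstacle hit
    }
    return best;
}
```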




#5197969 How should i call my classes instead of "xyManager"?

Posted by haegarr on 13 December 2014 - 07:05 AM


ImageCache sounds fine, thanks.

Well ... notice the process: A client requests the resource library (to avoid the word "manager" here ;)) to return a specific resource. The library asks the cache for the resource. The cache does not store it, so it returns nil. The library decides which storage allocator to use and triggers the loader to load the resource utilizing that storage allocator. The loader investigates its directory structure (keywords: archive file and file overriding) and detects the file. It looks for a suitable file format, requests an importer from it, and calls it to read the resource. The resource is then returned to the library. The library looks up which cache policy is installed for that resource and puts the resource into the belonging cache. It then returns to the calling client.
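The lookup chain can be sketched very roughly like this. All names are illustrative; a real system would add storage allocators, file-format detection, and per-resource cache policies:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

struct Resource { std::string data; };

class ResourceLibrary {
public:
    explicit ResourceLibrary(std::function<Resource(const std::string&)> loader)
        : load(std::move(loader)) {}

    const Resource& request(const std::string& name) {
        auto it = cache.find(name);      // 1. ask the cache first
        if (it != cache.end())
            return it->second;           //    hit: return immediately
        Resource r = load(name);         // 2. miss: the loader/importer reads it
        return cache.emplace(name, std::move(r)).first->second; // 3. cache, return
    }

    std::size_t cachedCount() const { return cache.size(); }

private:
    std::function<Resource(const std::string&)> load; // stands in for the loader chain
    std::map<std::string, Resource> cache;            // stands in for cache + policy
};
```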

 

From such a description, where the single tasks are separated, one can see how many instances are actually involved in such a "simple" thing as resource management. Many of them can be agnostic of the actual resource type; some can (or should) not. Almost all of them need not know how the resource is encoded on mass storage. Many aspects can be configured (like the storage allocation policy or the cache policy), or are somewhat runtime dependent (like the file format), or are not relevant to the client (like whether the resource is cached or not). Implementing all this in a single class is wrong, as is exposing it all into the wild (*), regardless of how the class is named.

 

You have already mentioned that your process is not that comprehensive, and that you rely on 3rd party libraries. However, the above should demonstrate what a comprehensive solution may look like in detail, especially how many classes are involved depending on the inherent tasks.

 

(*) This is why I would not name the facade class "ImageCache".




#5197952 How should i call my classes instead of "xyManager"?

Posted by haegarr on 13 December 2014 - 05:39 AM

The OP is somewhat vague, so I write down some possible pitfalls I see:

 

Your ImageManager may be problematic since it violates the single responsibility principle if implemented as a single class. The single tasks possibly involved here are (a) allocating local storage, (b) reading a resource from mass storage, (c) uploading the resource, (d) implementing a cache strategy, (e) granting access. Okay, all this together means to manage the resource, but the tasks are too different to be implemented in a single class. Instead, if you understand the manager as a facade granting a small API in front of the wrapped functionality, it is IMHO okay to name it "manager" if you wish.

 

Your BuildingManager and UnitManager are even more problematic. They implement some kind of scene setting but also implement collision detection and update services!? These tasks are not related and IMHO cannot even be subsumed under a single "management".

 

It reads like your EventManager could be named EventProcessor. However, if it executes commands that are generated by events, it is perhaps a CommandProcessor supplied by an EventDispatcher? Because "handle Commands and other events" doesn't seem to me to fit in a single class.

 

 

EDIT: wintertime was a bit faster than me, and I second that post :)




#5197940 How do I add items into inventory of different types?

Posted by haegarr on 13 December 2014 - 03:20 AM

Using inheritance just for the sake of distinguishing kinds of items does not really make sense. If this is the only intention, then a solution with an enum is just fine, especially since you're already anchoring the existence of armor in the interface of Actor.

 

However, one problem that arises is that something like "is armor" or "is weapon" is a property, not a type, and should be understood as "is usable as armor" and so on. For example, a chain mail is primarily an armor but it is also a wearable, while a wooden shield is an armor but not a wearable. Or, a sword is primarily a weapon, but a hammer may be a weapon but also a tool. Well, your game may ignore the manifold usability of items, of course, as many games do.

 

Now, to come back to your question, an OOP mechanism like inheritance is well used if the functional differences are hidden behind a common interface. For example, if the Item class provides an Item::Usability enumeration as well as a re-implementable request interface similar to this

class Item {
public:

    enum class Usability : uint32_t {
         Armor,
         Weapon,
         Tool
    };

    // Default: the item is not usable in the requested way.
    virtual int getIntProperty( Usability forUsability ) const {
        return 0;
    }

    virtual ~Item() = default;

};

a client can ask for any defined property by using a single generic function and without ever knowing what concrete type the item is of. The advantages of such a solution are

* adding a property does not have an effect on already existing derived classes, because the interface does not change,

* the returned number represents an effect; it may be 0 (meaning "not usable in the requested way"), positive, or even negative,

* a single item can still respond to several properties with a number unequal to 0.

The disadvantages are

* properties are typed, and for each type you need to have a function,

* a virtual function is used (well, this is not a disadvantage in principle, but it drains performance if called very often per frame).

 

This is, however, only one solution. Others exist as well.
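For illustration, here is a hypothetical subclass using the interface from above: a hammer that responds to both the Weapon and the Tool property (the numbers are made up):

```cpp
#include <cassert>
#include <cstdint>

class Item {
public:
    enum class Usability : std::uint32_t { Armor, Weapon, Tool };

    virtual int getIntProperty(Usability forUsability) const {
        return 0; // default: not usable in any requested way
    }

    virtual ~Item() = default;
};

class Hammer : public Item {
public:
    int getIntProperty(Usability u) const override {
        switch (u) {
            case Usability::Weapon: return 4; // usable as a modest weapon
            case Usability::Tool:   return 7; // primarily a tool
            default:                return 0; // not usable as armor
        }
    }
};
```

A client holding only an Item reference can query any usability without ever knowing that it deals with a Hammer.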




#5197130 Drawing many enemies

Posted by haegarr on 09 December 2014 - 05:00 AM

I already searched OGL tips and tricks, though I found nothing useful.

Isn't there anything mentioned about "batching"?

 

 

2. Decrease number of OGL calls by setting the texture, shader, vao and other uniforms once and then call for each enemy the set-matrix and draw-this-call.

From this description it isn't clear what you do exactly. Let us assume that you set a matrix as a uniform variable, hence pushing presumably 3*4*4 = 48 bytes to the GPU per enemy. Now, using indexed triangles, a quad with texture coordinates may consume 4*(2*4+2*2) = 48 bytes (the indices can be applied in static draw mode). So going the "old school" way of batching allows for a single call to push the dynamic VBO (which itself can be done in a way that avoids one or another synchronization stall) and a single call to draw, without causing more traffic (in fact, the traffic will be less due to data concentration). The only thing is that the CPU needs to perform the transformations.
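A minimal sketch of such CPU-side batching (illustrative names; the GL upload and draw calls are omitted). The vertex layout matches the 12-byte-per-vertex estimate above: 2 floats for position plus 2 unsigned shorts for texture coordinates:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct Vertex { float x, y; std::uint16_t u, v; }; // 2*4 + 2*2 = 12 bytes

// Appends one enemy quad, already transformed by the CPU into its final
// position, to the batch. One glBufferSubData + glDrawElements pair then
// renders all enemies at once.
void appendQuad(std::vector<Vertex>& batch, float cx, float cy, float half)
{
    batch.push_back({cx - half, cy - half, 0,     0});
    batch.push_back({cx + half, cy - half, 65535, 0});
    batch.push_back({cx + half, cy + half, 65535, 65535});
    batch.push_back({cx - half, cy + half, 0,     65535});
}
```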

 

This is one approach that works on ES 2.0 hardware, too. However, there may be further optimizations.




#5196835 How to scale UI without losing image quality?

Posted by haegarr on 07 December 2014 - 02:23 PM


I was looking around and I found an image format called SVG. from what I understand, it scales really well with any resolution. What's your opinion about it? and why isn't it replacing PNG as a file format? 

Because you're comparing apples and oranges. PNG is a raster image format, and SVG is a vector image format. Present-day GPUs are built to work with raster graphics. A vector graphics image does not "scale to a resolution"; it is given resolution-independently and then rasterized at a given resolution. This rasterization is an additional process necessary for rendering. If you're looking for raster graphics at several resolutions, then search for multi-resolution images (MRI). The MIPmaps of textures are a kind of MRI.




#5196826 How to scale UI without losing image quality?

Posted by haegarr on 07 December 2014 - 01:53 PM


So my question is. Is there another way to take something like the image above and scale it without losing any quality?

The 9-slice is a method that is easily controllable. However, it is possible to generalize the method: all you need is to define a mesh and write a corresponding routine that knows which portions of the mesh are allowed to scale and which are not.
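For one axis, the idea can be sketched with a hypothetical helper: the two border columns keep their pixel size, and only the middle column absorbs the size change (the same computation applies per row, and a generalized mesh simply has more such fixed and stretchable regions):

```cpp
#include <cassert>

// The x-positions of the 4 vertical grid lines of a 9-slice, scaled to
// targetWidth while keeping the left and right borders at their source size.
struct SliceX { float x0, x1, x2, x3; };

SliceX sliceColumns(float targetWidth, float leftBorder, float rightBorder)
{
    SliceX s;
    s.x0 = 0.0f;
    s.x1 = leftBorder;                // left border keeps its width
    s.x2 = targetWidth - rightBorder; // right border keeps its width
    s.x3 = targetWidth;               // the middle column stretches
    return s;
}
```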




#5195842 OpenGL mipmaps with texture arrays

Posted by haegarr on 02 December 2014 - 02:34 AM

I thought that may be the case. That's the second parameter, correct? When I try to increase it, everything turns solid black/white or just really....weird (kind of like an old crt monitor). Does it need to be a specific number (such as divisible by 4 or anything)? I've tried various values but all have the same effect.

It is the 2nd parameter, yes. Its value must not exceed the hardware limit, which can be requested by using GL_TEXTURE_MAX_LEVEL with glGetTexParameteriv, but I doubt that you actually exceed it. It must further be such that the mipmap level of size 1 by 1 is not generated more than once (the reference says that it must not be greater than log2(max(width, height)) + 1, which in your case computes to 11). Besides that, it may be any positive integer number.
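The upper bound mentioned above, floor(log2(max(width, height))) + 1, can be computed like this (a hypothetical helper; it counts how often the larger dimension can be halved until the 1x1 level is reached):

```cpp
#include <algorithm>
#include <cassert>

// Maximum number of mipmap levels for a texture of the given size,
// i.e. the largest value that the levels parameter may take.
int maxMipLevels(int width, int height)
{
    int levels = 1;                    // the base level always exists
    int size = std::max(width, height);
    while (size > 1) {                 // halve down to the 1x1 level
        size /= 2;
        ++levels;
    }
    return levels;
}
```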

 

EDIT: Oops, glGetTexParameter naturally gives you a set-up parameter, not a hardware limit.




#5195838 OpenGL mipmaps with texture arrays

Posted by haegarr on 02 December 2014 - 02:10 AM

You are allocating just one mipmap level with the call to glTexStorage3D. Then you fill the base level with the loaded image, and then you ask to generate mipmaps. Because there are no levels allocated above the base level (and only those would be generated by glGenerateMipmap), nothing happens in the end. So increase the levels parameter of glTexStorage3D.





