RobMaddison
Offline - Last Active Today, 05:46 PM
- Member Since: 28 Oct 2007
- Group: Members
- Active Posts: 711
- Profile Views: 6,015
- Submitted Links: 0
- Member Title: Member
- Age: Unknown
- Birthday: Unknown
Posted by RobMaddison on 05 December 2014 - 09:11 AM
To compute acceleration and therefore velocity, I'm using the standard slope calc:
Acc = grav * sin(theta)
I then add or subtract the friction term (mu * grav * cos(theta)), i.e. the coefficient of friction times the normal force, depending on whether the character is going down the slope or up it. At the moment the slope is calculated along the edge of the snowboard, so it doesn't matter which way the board is facing: the character is either going down a slope with positive acceleration or up one with negative acceleration.
I've cancelled out mass here; theta is the angle of the slope relative to the ground plane, and mu is the coefficient of friction, which I currently have at 0.2.
This seems to work quite nicely: when the character is moving down a slope, I increase the velocity by the acceleration times my delta time and the result looks pretty natural. The only point where it looks a little unnatural is when the character is travelling fast down a slope that faces an opposite slope. On the upward slope he climbs back to almost the same height he started from, which I wouldn't expect - I would have thought he'd slow down much more quickly.
So my question is: am I calculating this correctly? Is it enough to apply negative acceleration and friction on the way up, or should I be applying some further decelerating force as the character climbs the incline?
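For what it's worth, here is a sketch of the acceleration logic as described (function and variable names are mine). The key detail is that friction always opposes the current direction of travel, so on the way up it adds to the gravity term rather than subtracting from it:

```cpp
#include <cassert>
#include <cmath>

// Signed acceleration along the direction of travel (m/s^2).
// theta: slope angle in radians; mu: kinetic friction coefficient;
// movingUphill: true when the velocity points up the slope.
double slopeAcceleration(double theta, double mu, bool movingUphill) {
    const double g = 9.81;
    double gravity  = g * std::sin(theta);       // along-slope gravity component
    double friction = mu * g * std::cos(theta);  // always opposes motion
    if (movingUphill)
        return -(gravity + friction);  // both terms slow the rider
    else
        return gravity - friction;     // gravity drives, friction resists
}
```

Assuming the uphill/downhill sign flip is already handled as described, the near-symmetric climb is roughly what this model predicts with only Coulomb friction at mu = 0.2; a velocity-squared air-drag term is the usual extra force that makes the slowdown look right.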
Posted by RobMaddison on 27 November 2014 - 08:33 AM
My animation system is almost complete but I've come across an issue that I haven't been able to get any guidance on from Google.
For synchronisation purposes, I keep my animation lengths normalized between 0 and 1 and have a normalized speed/factor per animation, which effectively allows me to take the correct frame. The normalized speed factor is 1/animation length in seconds.
This allows me to synchronise two animations with different lengths together by adjusting the 'speed' of one to match the other. For example, going from walk to run, I lerp between the speed of the walk animation and the speed of the run animation. Having foot positions in both animations at 0 and 50% means that they line up perfectly and my character can smoothly transition from walk to run as slowly or quickly as he likes. This is standard stuff I think.
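The scheme above can be sketched like this (names are mine): each clip contributes a normalized playback rate, and the walk-to-run transition lerps the rate while both clips share one phase value:

```cpp
#include <cassert>
#include <cmath>

// A clip's normalized playback rate is 1 / its length in seconds,
// so advancing a shared phase t in [0,1) keeps clips phase-aligned.
struct Clip {
    double lengthSeconds;
    double rate() const { return 1.0 / lengthSeconds; }
};

// Advance the shared normalized phase by dt, blending the two clip
// rates by weight w (0 = pure walk, 1 = pure run).
double advancePhase(double t, double dt, const Clip& walk, const Clip& run, double w) {
    double rate = (1.0 - w) * walk.rate() + w * run.rate();
    t += rate * dt;
    return t - std::floor(t);  // wrap back into [0,1)
}
```

Because both clips sample the same phase, foot contacts placed at 0% and 50% in each clip stay lined up for any blend weight.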
My animation blend tree functionality works fine so far, I can setup an arbitrarily complex blend tree by having each blend node take in 2 or more inputs - an input can be an animation clip or another blend node. Using a blend parameter at each blend node to cover the range of thresholds (ala Unity mecanim), I can smoothly blend between the 2-n inputs.
My problem is that my synchronisation only works for transitions between states - e.g. going from a non-blend-tree walk animation state to a non-blend-tree run animation state - and I'd now like to use it within my blend tree.
In order to compute the lerp between the normalized speed of clip1 and that of clip2, you need to know both normalized speeds. How would this work if a blend node had two inputs and both were themselves blend nodes?
So I have the following:
- input 1: Blend tree
  - input 1: clip walk left
  - input 2: clip walk forward
  - input 3: clip walk right
- input 2: Blend tree
  - input 1: clip run left
  - input 2: clip run forward
  - input 3: clip run right
All 3 walk clips in input 1 are the same length and all 3 run clips in input 2 are the same length but the walk lengths are longer than the run lengths.
In order to calculate a pose using a blend tree with synchronisation, I need to know the normalized speeds on each side of the tree at each level. I've been going round in circles trying to visualise this in my head and on paper, and the only way I can see to tackle it is to do it in two passes: one to compute the normalized speeds down through the tree, and a second to apply those speeds to the actual clips.
This could get even more confusing if at a level further down the tree, the inputs of a blend node have different normalized speeds (as per the root node of the earlier example).
Is there an easier way? It feels overly complex to me and like I might be missing a trick.
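One possible simplification (this is my sketch, not an established API): because a blend node's effective normalized speed is just the weighted average of its children's speeds, the "first pass" collapses into a single recursive rate query. You can then advance one shared phase using the root's rate and sample every clip at that phase:

```cpp
#include <cassert>
#include <cmath>
#include <memory>
#include <utility>
#include <vector>

// Every node reports a normalized playback rate: 1/length for clips,
// the blend-weighted average of its inputs for blend nodes.
struct Node {
    virtual ~Node() = default;
    virtual double rate() const = 0;
};

struct ClipNode : Node {
    double lengthSeconds;
    explicit ClipNode(double len) : lengthSeconds(len) {}
    double rate() const override { return 1.0 / lengthSeconds; }
};

struct BlendNode : Node {
    // (child, weight) pairs; weights assumed to sum to 1 at each node.
    std::vector<std::pair<std::shared_ptr<Node>, double>> inputs;
    double rate() const override {
        double r = 0.0;
        for (const auto& in : inputs) r += in.second * in.first->rate();
        return r;
    }
};
```

With this, nested blend nodes need no special handling: the root's rate() already accounts for mismatched walk/run lengths at any depth, and a second traversal just samples clips at the shared phase.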
Posted by RobMaddison on 06 November 2014 - 09:35 AM
Then not only are you not drawing your player twice, you're also ensuring you won't get any skin poking through clothing. I would imagine your base player mesh would be naked; you can then cater for someone wearing jeans with no top - if your application requires that kind of thing.
This is what I'm planning to do for my character
Posted by RobMaddison on 29 September 2014 - 01:25 AM
Posted by RobMaddison on 25 September 2014 - 03:52 AM
I've taken the position data and yaw/heading angle from the root bone of each animation and inserted it into each clip as a separate stream of data. When I'm blending between a clip that has motion data and one that doesn't (e.g. in-place idling), I simply transform the pose without motion data to match the one that has it, then blend. At the end of the transition, I translate and rotate the character to match the motion data - it works great. My character now turns on the spot, and when he stops turning he nicely blends back into idling facing exactly the right direction - it actually looks pretty natural for a first attempt. I now just need to look at foot placement, but I'm really happy with it so far.
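A minimal 2D sketch of that alignment/blend step (all names are mine, not from the engine). The one subtlety worth coding carefully is lerping yaw along the shortest arc, so a transition near the 0/360 degree boundary doesn't spin the character the long way round:

```cpp
#include <cassert>
#include <cmath>

const double kTwoPi = 6.283185307179586;

// Root transform carried in the per-clip motion data stream.
struct RootXform { double x, z, yaw; };  // yaw in radians

// Interpolate angles along the shortest arc: std::remainder returns the
// signed difference wrapped into [-pi, pi].
double lerpAngle(double a, double b, double t) {
    double d = std::remainder(b - a, kTwoPi);
    return a + d * t;
}

// Blend an in-place root (already aligned to the motion clip) with the
// motion clip's root; positions lerp, yaw takes the short way round.
RootXform blendRoot(const RootXform& a, const RootXform& b, double t) {
    return { a.x + (b.x - a.x) * t,
             a.z + (b.z - a.z) * t,
             lerpAngle(a.yaw, b.yaw, t) };
}
```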
Thanks again for all your help
Posted by RobMaddison on 24 March 2014 - 05:50 AM
That would be quite wasteful - you'd have to re-implement quite a lot of stuff cross-platform, and while you can write the editor no differently than any other engine-using client, with a different language you always need some wrapper, either written by hand or generated by tools, and those often fail and/or slow builds down. I've previously written the editor side of my engine using Java + JNI and later C# (I'm probably not the best with those two languages); now I'm back to C++ again and it's somehow simpler and faster to work with (the downside is that writing the UI parts becomes a bit of an overhead, especially compared to C#). I think that was also the argument for Tim Sweeney dropping his 'baby' UnrealScript in favour of just using one language across all engine parts. You guys make it tempting to license it for a month to get hands-on with it - I'd especially be curious how they've written the editor (as that's always my weak point). What UI lib are they actually using?
I also forgot to mention, I was quite surprised that the whole editor is in C++. I thought these big PIE (play-in-editor) engines generally wrote their editor tools in C# and used interop to embed the C++ engine. Very interesting - all of the editor stuff is just #defined in.
True, I had only experimented with my engine running inside a C# client; the interop was OK for the limited functionality I put in, but it would be much easier with everything in the same language. As you say, writing UI components isn't the quickest thing to get right, but I guess when you've got a whole team just working on the UI, things are easier.
I was interested in the UI part too - more so than doing what I thought I'd do first (which was a search for 'DrawIndexedPrimitive'!). I haven't explored it too much, but as far as I can see their UI lib is just built into the editor code. They use widgets which form the basis of each control; each widget overrides the OnPaint method and adds 'draw elements', which can be anything from lines to solid blocks of colour or graphics, etc.
They use a hard-coded declarative style I've not seen before to build the UI 'forms', which looks like nested method calls followed by multiple nested arrays, but with each section starting with a '+'. From memory I can't recall the exact syntax, but it looks quite odd to me. Perhaps it's something new in the latest version of C++?
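For reference, that '+' syntax isn't new core C++; it can be built with ordinary operator overloading (UE4's Slate declarative macros work along these lines). A toy mimic, with all names invented here: operator+ appends a sibling slot, operator[] nests content inside it:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy declarative widget tree in plain C++.
struct Widget {
    std::string name;
    std::vector<Widget> children;
    Widget(std::string n) : name(std::move(n)) {}
    // operator[] nests content inside this widget.
    Widget& operator[](Widget child) {
        children.push_back(std::move(child));
        return *this;
    }
};

// operator+ appends a child "slot" to the parent.
Widget operator+(Widget parent, Widget child) {
    parent.children.push_back(std::move(child));
    return parent;
}
```

Usage then reads much like the style described above:

```cpp
Widget panel = Widget("VerticalBox")
    + Widget("Slot")[ Widget("Button") ]
    + Widget("Slot")[ Widget("TextBlock") ];
```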
Posted by RobMaddison on 24 March 2014 - 01:32 AM
"Stupid decision here, you will never have what UE4 gives you and with the quality he gives you."
Is that a rhetorical question, or are you saying my question is stupid?
"Will I dump my own engine now and use theirs? Probably not as I'm quite far into it and I don't want to see all that work go to waste"
Either way, I get great enjoyment from working on my own engine, and your statement didn't need saying - if you feel you need to inform people that a AAA game engine gives more functionality and quality than one developer can produce on his own, maybe you need to hang around in the beginners' section more. Although that's probably slightly condescending even to beginners...
Posted by RobMaddison on 23 March 2014 - 04:16 PM
The thing that surprised me most, though, was the amount of raw pointers being passed about. In my engine almost every pointer is a boost::shared_ptr; I think I only saw areas in the module manager using smart pointers. They do have some form of garbage collection, though, so perhaps they handle pointers with their own reference counting.
Also noticed a lack of STL, they have their own classes.
All in all, well worth $19, if only to finally see how the "experts" do it. Their project setup is also great. Plenty to learn from.
Will I dump my own engine now and use theirs? Probably not as I'm quite far into it and I don't want to see all that work go to waste
Posted by RobMaddison on 17 October 2013 - 01:36 AM
I'd love to have a look at the code of some of the AAA games out at the moment, especially the likes of COD. I guess part of the fun of game development for me is trying to work out how things are done and how to do my own interpretation. I generally only buy a game if I want to see how it works - I spend most of my time in corners looking at detail or at how they've done shadows, etc. I bought the PC version of Crysis a while back and I've never actually played the game; I spent all my time in the sandbox.
Posted by RobMaddison on 16 October 2013 - 04:38 PM
So I re-ask my original question: what metrics are we using for defining "complicated"?
I spent a good half an hour or so trying to work out why it wasn't building out of the box in 2005 and some of the files I found myself in were heavy in asm and used lots of SSE (I guess) or SIMD stuff I've never come across.
I've just had another good look through, and I guess that along with the asm, they use very short variable names, which to me always makes things look more complicated. The question I was really asking is: are AAA games flooded with asm and things like that?
Things under BT_USE_NEON appear to use lots of calls I've never heard of; I guess it was just unfamiliarity that fazed me. I'm back to being unfazed for my own project.
Posted by RobMaddison on 15 October 2013 - 01:53 PM
For a few days I've been weighing up the pros and cons of writing my own physics versus using something like Bullet. If I do my own, it'll obviously get pretty complex, but if I can't model the different parts of the snowboard in a middleware physics engine, I might have to consider my own cut-down version.
Posted by RobMaddison on 10 October 2013 - 01:16 PM
Polygon joins: if you don't disguise joins, they can completely ruin a scene. I mean a boulder on a terrain needs to either have foliage hiding the joins or some clever texturing.
Too much bloom: this can make a scene look too 'bloomy'... it's almost like the soft focus on film when they want to make someone look prettier than they are.
Tearing: this is a huge no no for me, I refuse to play a game with it and it frustrates me that the developers have obviously put too much in and still release it with tearing instead of pulling things back.
Badly coloured smoke effects: Smoke that just doesn't match the scenery colour-wise is inexcusable
Badly rendered smoke effects: in the real world, smoke doesn't have hard edges
Badly rendered billboards: if you're using billboards to cheat, use them sparingly, otherwise this can look more cheap than realistic.
Collision: a person walking into a wall and continuing to walk just looks wrong - along with artefacts sticking into/out of things
Add chaff: scattering a few little rocks here and there and using decals can greatly help realism.
Texturing: I'm more pleased to see cleverly placed textures than hi res ones, i.e. keep repeating textures to a minimum.
I think in general, if an effect doesn't look good, e.g. billboards as grass, rethink it or take it out.
Posted by RobMaddison on 06 October 2013 - 08:49 AM
But to answer your question, I'd try to keep it 20-50%. You need some room for characters, special effects, and post-processing.
The budget always varies from game to game. You often can't lock down your budgets until you've implemented the whole workload and then started to optimise, cut back and balance them together... or you rely on experience from previous (similar) games to set your starting budgets.
Maybe you want 30% for characters, maybe 10%. Maybe 50% for post, maybe 10% :-/
Often I've seen environments and characters combined at ~25% and post at 50%, but on other games that could be flipped.
What kind of game is it, what camera angles/distances, and what else needs to be drawn?
It's a snowboarding/skiing game. I'll need to draw other static objects like instanced trees, huts, jumps, etc., one main skinned character, probably a maximum of 2 or 3 other close characters, plus up to a dozen or so other lower-detail skinned characters. Draw distance is fairly crucial and needs to be, at times, as far as the eye can see. I've developed a crude dynamic PVS method which I need to rework, but it works OK for now.
I'm currently rendering at an average of around 2ms, up to an absolute maximum of 5ms, to draw the entire 4096x4096 terrain with all texturing. Based on the comments, that feels like a pretty good start.
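As a rough sanity check on those numbers (the helper name is mine): at 60 fps the frame is about 16.7ms, so 2ms of terrain is roughly 12% of the frame, and the 5ms worst case is about 30% - inside the 20-50% range suggested above.

```cpp
#include <cassert>
#include <cmath>

// Fraction of a frame at the given fps consumed by a pass of passMs
// milliseconds: passMs / (frame time in ms).
double budgetShare(double passMs, double fps = 60.0) {
    return passMs / (1000.0 / fps);
}
```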
Posted by RobMaddison on 29 August 2013 - 04:30 AM
I'm just wondering what the best practice is with chunked terrains and component entity systems. Should each chunk be an entity in itself? Or should the whole terrain be an entity?
I'm trying to streamline and refactor my rendering process, and indeed design parts of it that haven't been done yet. I have a sandbox project that doesn't use my new architecture; it contains my terrain system, which is honed and extremely efficient. Porting it to my new game engine is throwing up lots of design decisions. My rendering engine essentially works sequentially through a vector of render 'tokens', which allows me to sort it on various criteria. My main question is: should my terrain parts (i.e. chunks) be just another render token in the list, or should I keep my specialised terrain rendering code in its own render method?
It feels wrong to keep it in its own method, but its quadtree, relational nature doesn't really lend itself to a linear list of completely unrelated render tokens. My setup is essentially like this at the moment:
Entity (contains standard orientation data)
---> RenderableComponent (there is a link to this object in each 'RenderToken' which just holds the sortable key, the material and link)
---> SkeletonComponent (this is just the skeleton data)
---> MeshComponent (this is just the mesh data)
---> AnimatorComponent (this builds skeletal poses based on animation data)
My entity system doesn't really have 'systems' that control the entities/components; rather, the functionality exists within the components. This was an early design decision that I quite like, but it's easily changeable.
So how would you go about moving a quadtree-based terrain system into this architecture? Keep it a self-contained terrain system or integrate it into the pipeline properly?
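One way to frame the trade-off (a sketch under my own naming, not a recommendation from the thread): keep the quadtree as a specialised culling structure, but have its traversal emit ordinary sortable tokens each frame, so visible terrain chunks join the same linear list as every other renderable:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// One entry in the linear render list; the key packs whatever sort
// criteria matter (material, depth, state bits).
struct RenderToken {
    uint64_t sortKey;
    const void* renderable;  // -> RenderableComponent, or a terrain chunk
};

struct RenderQueue {
    std::vector<RenderToken> tokens;

    // The quadtree traversal calls this for each visible chunk,
    // exactly as mesh entities do for their RenderableComponents.
    void submit(uint64_t key, const void* r) { tokens.push_back({key, r}); }

    void sortByKey() {
        std::sort(tokens.begin(), tokens.end(),
                  [](const RenderToken& a, const RenderToken& b) {
                      return a.sortKey < b.sortKey;
                  });
    }
};
```

The terrain system stays self-contained (quadtree, LOD selection), but becomes a token producer rather than a separate render path, so sorting and state management remain in one place.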