I do a similar style terrain to what you're proposing. I stitch terrain patches with different lods together in the vertex shader by simply moving the edge-in-question vertices of the higher LOD patch to match those of the lower LOD patch. Performance is great.
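On the CPU the snapping boils down to something like this (in the real thing it's done in the vertex shader, and all the names here are made up for illustration):

```cpp
// Stitching a fine patch to a coarser neighbour along a shared edge.
// A vertex that only exists at the finer LOD is moved to the position
// the coarser patch would give it, so the two edges match exactly.

// Snap a fine-grid vertex index to the nearest coarse-grid vertex at or
// below it. 'ratio' is the vertex spacing ratio between the two LODs
// (2 for a single LOD step).
int snapEdgeIndex(int fineIndex, int ratio)
{
    return (fineIndex / ratio) * ratio;
}

// For a heightfield edge, the snapped vertex takes the height the coarse
// patch would have there: a linear interpolation between the two
// enclosing coarse vertices.
float snappedHeight(const float* edgeHeights, int fineIndex, int ratio)
{
    int lo = snapEdgeIndex(fineIndex, ratio);
    if (lo == fineIndex) return edgeHeights[lo];  // already a coarse vertex
    int hi = lo + ratio;
    float t = float(fineIndex - lo) / float(ratio);
    return edgeHeights[lo] * (1.0f - t) + edgeHeights[hi] * t;
}
```

In the shader it's the same idea: work out whether the edge vertex also exists in the neighbouring patch's LOD, and if not, interpolate its position from the two neighbours that do.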
I have made one or two posts on the subject on this forum a few years ago, I'll see if I can dig them out
This is essentially two 8192x8192 renders each frame (along with the rest of the scene of course). It works ok but does slow things down a bit.
Would it be straightforward to skip the copy operation, and just ping-pong between a pair of pre-allocated textures? Binding the second texture ought to be a lot cheaper than copying it first, seeing as you'd save half your fill-rate.
Took me a few seconds to work out what you meant there, but I implemented it this morning and it worked perfectly, saving half my operation time. Brilliant idea, thank you Swiftcoder.
It only took 10 minutes to implement too, double whammy..!
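For anyone else reading later, the ping-pong scheme really is that simple - just a sketch with made-up names, not my actual code:

```cpp
#include <utility>

// Stand-in for a render-target texture handle.
struct Texture { int id; };

// Two pre-allocated render targets: each frame, render into 'write'
// while the shader samples from 'read', then swap the pointers.
// No copy pass at all, so you save the fill rate the full-size
// copy was costing.
struct PingPong {
    Texture* read;
    Texture* write;
    void swap() { std::swap(read, write); }
};
```

Last frame's result is always in `read`, and you never touch the contents of either texture outside the normal render pass.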
I can see this 'confusion' from both sides. L.Spiro, with respect, you seem to get frustrated quite quickly - your posts are obviously useful and you are clearly a very experienced and talented developer, but ranting, being sarcastic or condescending, and telling people they are wrong and you are right is no fun for anyone to read, especially, I expect, for the OP.
OP, your method of fixed-distance shadow map cascades does of course work, I've done it myself with good results, but as someone pointed out, you need to render your shadow map(s) to a render target to see why you're getting such low resolution. You'll probably find that your house appears too small in the first shadow map, so you need to fit the light's frustum more tightly around the object(s)/house for each cascade. When your close-up objects appear small in the first cascade, you should be able to envisage that the shadow map texels referenced in the final image are going to be big, hence your 'jaggies'. Shadow map texels are the key here.
Before changing your method to Hodgman's, which of course will also work, I would suggest rendering your shadow maps out to check what you're seeing from the light's view, then adjusting the light's frustum to encompass a smaller area on the first cascade, thereby making the house render larger in the shadow map, and then doing the same for the other cascades. Doing this will no doubt, as L.Spiro and Hodgman point out, bring all 3 cascades into the picture regardless of how close you are to the house (unless you're super close, like zooming in on one of the window ledges).
Also, I don't actually think your shadow maps look that bad for the method you're using. You will get finer resolution from the same viewpoint with Hodgman's method because, in your image, your camera isn't that close to the house, so with fixed distances a tighter frustum on your first cascade will probably not even be used (nothing is close enough to the camera).
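As a rough illustration of what 'tightly encompassing' means: transform the points you care about (the cascade's slice of the view frustum, or the house's bounding box corners) into light space and take the min/max - those extents become the light's ortho projection. All names here are my own sketch:

```cpp
#include <algorithm>
#include <cfloat>

struct Vec3 { float x, y, z; };

// Tight axis-aligned bounds around a set of points already transformed
// into light space. The resulting extents feed straight into the
// left/right/bottom/top/near/far of the light's orthographic matrix.
struct Bounds { Vec3 mn, mx; };

Bounds fitLightFrustum(const Vec3* pts, int count)
{
    Bounds b{{ FLT_MAX,  FLT_MAX,  FLT_MAX},
             {-FLT_MAX, -FLT_MAX, -FLT_MAX}};
    for (int i = 0; i < count; ++i) {
        b.mn.x = std::min(b.mn.x, pts[i].x);
        b.mn.y = std::min(b.mn.y, pts[i].y);
        b.mn.z = std::min(b.mn.z, pts[i].z);
        b.mx.x = std::max(b.mx.x, pts[i].x);
        b.mx.y = std::max(b.mx.y, pts[i].y);
        b.mx.z = std::max(b.mx.z, pts[i].z);
    }
    return b;
}
```

The smaller that box, the more shadow map texels each object gets, which is exactly the resolution win being discussed above.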
Most importantly, let's all get along and be nice to each other - that's half the reason this website is the best forum for game dev.
I did some profiling, switched over to using a pool of textures and I still had stuttering. Then I had a look at my surface locking code and realised I was not supplying any flags to LockRect. Passing D3DLOCK_DISCARD has fixed the issue.
I'm not entirely sure why this fixed it - presumably the discard flag lets the driver hand back a fresh region of memory rather than stalling until the GPU has finished with the surface - but I'm back to smooth rendering.
Thanks again for all your thoughts and ideas, much appreciated.
I'm not 100% sure why, but it appears my local shader objects were somehow being shared, as both views were using the same underlying renderer. In theory this should have worked with one renderer, but clearly it doesn't, so I'll need to do some refactoring and give each view its own renderer instead of sharing one - but I'm very happy to see two perfectly rendered views side by side!
I'm using Newton's laws in my snowboarding game to move my character down the slopes. I've considered using a physics engine, but I don't think I need that just yet.
To compute acceleration and therefore velocity, I'm using the standard slope calc:
Acc = grav * sin(theta)
I then add or subtract the friction × normal force term (mu * grav * cos(theta)) depending on whether the character is going down a slope or up. At the moment, the slope is calculated along the edge of the snowboard, so it doesn't matter which way it is facing: it's either going down a slope with positive acceleration or up a slope with negative acceleration.
I've cancelled out mass here; theta is the angle of the slope with reference to the ground plane, and mu is the coefficient of friction - I've currently got that at 0.2.
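Put as code, the calc I'm describing is roughly this (the sign convention and names are just for illustration):

```cpp
#include <cmath>

// Acceleration along the slope for the board on an incline.
// theta: slope angle in radians (relative to the ground plane).
// mu:    coefficient of kinetic friction (0.2 in my case).
// Gravity always pulls down-slope, friction always opposes motion,
// so the two terms oppose each other going downhill and add going
// uphill. Mass cancels, so this returns acceleration directly (m/s^2).
float slopeAcceleration(float theta, float mu, bool movingUphill)
{
    const float g = 9.81f;
    float gravityTerm  = g * std::sin(theta);       // down-slope pull
    float frictionTerm = mu * g * std::cos(theta);  // opposes motion
    // Positive = speeding up down the slope, negative = slowing down.
    return movingUphill ? -(gravityTerm + frictionTerm)
                        :  (gravityTerm - frictionTerm);
}
```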
This looks like it works quite nicely: when the character is moving down a slope, I increase the velocity by the acceleration multiplied by my delta time and it appears pretty natural. The only point it looks a little unnatural is when the character is travelling pretty fast down a slope that faces an opposite slope. He climbs almost as high up the opposite slope as the height he came from, and I wouldn't expect this - I would have thought he would slow down a lot quicker.
So my question is: am I calculating this correctly? Is it enough to apply negative acceleration and negative friction, or should I be applying some further force, accounting for kinetic energy, to slow the character going up an incline?
My animation system is almost complete but I've come across an issue that I haven't been able to get any guidance on from Google.
For synchronisation purposes, I keep my animation lengths normalized between 0 and 1 and have a normalized speed factor per animation, which effectively allows me to take the correct frame. The normalized speed factor is 1 / animation length in seconds.
This allows me to synchronise two animations with different lengths together by adjusting the 'speed' of one to match the other. For example, going from walk to run, I lerp between the speed of the walk animation and the speed of the run animation. Having foot positions in both animations at 0 and 50% means that they line up perfectly and my character can smoothly transition from walk to run as slowly or quickly as he likes. This is standard stuff I think.
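As a sketch (made-up names), the speed lerp and phase advance look like this:

```cpp
// A clip's normalised speed is 1 / length-in-seconds, so advancing a
// shared phase by speed * dt keeps the phase in [0,1) for any clip
// length. blend = 0 gives pure walk timing, blend = 1 pure run timing;
// both clips are sampled at the same phase, so their footfalls at
// 0% and 50% stay lined up throughout the transition.
float advancePhase(float phase, float walkLen, float runLen,
                   float blend, float dt)
{
    float walkSpeed = 1.0f / walkLen;
    float runSpeed  = 1.0f / runLen;
    float speed = walkSpeed + (runSpeed - walkSpeed) * blend;  // lerp
    phase += speed * dt;
    if (phase >= 1.0f) phase -= 1.0f;  // wrap the looping animation
    return phase;
}
```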
My animation blend tree functionality works fine so far. I can set up an arbitrarily complex blend tree by having each blend node take in 2 or more inputs - an input can be an animation clip or another blend node. Using a blend parameter at each blend node to cover the range of thresholds (à la Unity's Mecanim), I can smoothly blend between the 2-n inputs.
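For reference, the per-node weight calculation I'm describing is along these lines - simplified here to plain linear weighting between the two neighbouring thresholds, with made-up names:

```cpp
// Weights for a 1D blend node: each input has a threshold along the
// blend parameter's axis (thresholds sorted ascending). The parameter
// picks the two neighbouring inputs and splits weight linearly between
// them; everything else gets zero. Writes one weight per input.
void blendWeights1D(const float* thresholds, int count,
                    float param, float* weights)
{
    for (int i = 0; i < count; ++i) weights[i] = 0.0f;
    if (param <= thresholds[0])         { weights[0] = 1.0f; return; }
    if (param >= thresholds[count - 1]) { weights[count - 1] = 1.0f; return; }
    for (int i = 0; i < count - 1; ++i) {
        if (param <= thresholds[i + 1]) {
            float t = (param - thresholds[i]) /
                      (thresholds[i + 1] - thresholds[i]);
            weights[i]     = 1.0f - t;
            weights[i + 1] = t;
            return;
        }
    }
}
```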
My problem is that my synchronisation only works for transitions between states - going from a non-blend-tree walk animation state to a non-blend-tree run animation state - and I'd now like to use it within my blend tree.
In order to compute the lerp between the normalized speed for clip1 and the normalized speed for clip2, you need to know both normalized speeds. How would this work if you had a blend node with two inputs and both of those inputs were themselves blend nodes?
So I have the following:
- input 1: blend tree
  - input 1: clip walk left
  - input 2: clip walk forward
  - input 3: clip walk right
- input 2: blend tree
  - input 1: clip run left
  - input 2: clip run forward
  - input 3: clip run right
All 3 walk clips in input 1 are the same length and all 3 run clips in input 2 are the same length but the walk lengths are longer than the run lengths.
In order to calculate a pose using a blend tree with synchronisation, I need to know the normalized speeds of each side of the tree at each level. I've been going round in circles trying to visualise it in my head and on paper, and the only way I can see to tackle it would be to do it in two passes: once to compute the normalized speeds down through the tree, and a second pass to use those speeds on the actual clips.
This could get even more confusing if at a level further down the tree, the inputs of a blend node have different normalized speeds (as per the root node of the earlier example).
Is there an easier way? It feels overly complex to me and like I might be missing a trick.
I would go with your first idea and split your mesh up into manageable chunks.
Then not only are you not drawing your player twice, you're ensuring you won't get any skin coming through clothing. I would imagine your base player mesh would be naked, then you can cater for someone wearing jeans with no top - if your application requires that kind of thing.
Apologies for bringing this back to the top, I just wanted to close the topic off. I managed to clear the 'glitch' - unfortunately I don't know why it was happening but I refactored out all my 'prototype' code and laid everything out much neater and it solved the issue. I've got my little character wandering about the screen now with some nicely blended animations.