george7378

Member
  • Content count

    304
  • Joined

  • Last visited

Community Reputation

1443 Excellent

About george7378

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Art
    Design
    Programming

Recent Profile Visitors

17787 profile views
  1. george7378

    An Algorithm for Infinite Worlds

     Hey everyone - I wrote a little article recently which covers some of the aspects of creating adaptive terrain: http://www.gkristiansen.co.uk/2018/04/an-algorithm-for-infinite-worlds.html It's more of a high-level description of the idea than a technical article and I created it mainly to practise my writing, but there's also a GitHub link to a full implementation at the bottom. I've been working on and off on my virtual terrain project for a year now, and I thought I'd share the results in case anyone over here finds my simple example code helpful. Thanks for taking a look!
  2. Hi everyone, I have a quadtree terrain system which uses camera distance to decide when a node should split into 4 children (or merge the children). It's basically 'split if distance between camera and the centre of the node is less than a constant * the node's edge length'. This has worked OK so far, but now I've changed my algorithm to have many more vertices per node. This means that I want to get a lot closer to a node before deciding to split - maybe something like 0.1*edgeLength. I can no longer just check my distance from the centre as it won't work when I approach the edges of the node. I thought I could check the distance from each vertex in each node and find the smallest, but that seems like overkill (I'd be looping over every vertex in my terrain). So my question is - what's a more elegant way to decide whether to split/merge my quadtree nodes? Thanks for the help!
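     One possible criterion that avoids looping over vertices (a sketch only - BoundingBox here is XNA's, while the method names and splitFactor are made up): measure the distance from the camera to the closest point on the node's bounding box rather than to its centre, so the test behaves the same whether you approach the middle of a node or one of its edges. Merging at a slightly larger distance than splitting (a bit of hysteresis) helps avoid flickering right at the threshold.

     using Microsoft.Xna.Framework;

     public static class QuadtreeSplitHelper
     {
         // Distance from the camera to the closest point on the node's bounding box
         // (the box should include the node's height range, not just its footprint).
         public static float DistanceToNode(Vector3 cameraPosition, BoundingBox nodeBounds)
         {
             Vector3 closest = Vector3.Clamp(cameraPosition, nodeBounds.Min, nodeBounds.Max);
             return Vector3.Distance(cameraPosition, closest);
         }

         // Split when the camera is within splitFactor * edgeLength of the node itself,
         // e.g. splitFactor = 0.1f as mentioned above.
         public static bool ShouldSplit(Vector3 cameraPosition, BoundingBox nodeBounds, float edgeLength, float splitFactor)
         {
             return DistanceToNode(cameraPosition, nodeBounds) < splitFactor*edgeLength;
         }
     }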
  3. george7378

    Neighbour nodes on a cube

     Thanks again for the inputs. I decided to add properties to each root node which describe the rotations that need to be done in order to extract the correct neighbour node in each direction. These are passed down to any relevant child nodes which also touch the root border, and the end result requires minimal 'if' statements and does most of the work itself, provided you set the root nodes up properly.
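     A sketch of the kind of bookkeeping this implies (the names are made up here, and the actual per-face table depends on how the six roots are oriented): each root stores, per direction, the adjacent root plus a number of clockwise quarter turns, and a node crossing that border rotates both the direction it is asking about and any child index it extracts by that many turns.

     public enum Direction { North, East, South, West }

     public static class CubeNeighbourHelper
     {
         // Children are assumed to be indexed 0 = NW, 1 = NE, 2 = SW, 3 = SE.
         // Rotate a child index by clockwise quarter turns of the 2x2 grid.
         public static int RotateChildIndex(int childIndex, int quarterTurns)
         {
             int col = childIndex & 1;
             int row = childIndex >> 1;
             for (int i = 0; i < quarterTurns % 4; i++)
             {
                 int newCol = 1 - row;
                 int newRow = col;
                 col = newCol;
                 row = newRow;
             }
             return row*2 + col;
         }

         // Rotate a direction (N/E/S/W) by clockwise quarter turns (0..3).
         public static Direction RotateDirection(Direction direction, int quarterTurns)
         {
             return (Direction)(((int)direction + quarterTurns) % 4);
         }
     }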
  4. Hi again, another quadtree question - this time about neighbour finding in a set of 6 planar quadtrees forming a cube. At the moment, I've got a single quadtree in the XZ plane so finding neighbouring nodes of the same level is easy - e.g. if a top-left node wants to know its left-hand neighbour, all it has to do is get a reference to the top-right child of its parent's left-hand neighbour. This still works for nodes contained on the same side of the cube, but it fails if I want to know a neighbour across the boundary between two different quadtrees in the cube. It doesn't work because the definition of up, left, down or right depends on which border you're querying across, so asking for your left-hand neighbour using the above logic will only return the correct cell for some of the faces. Can anyone think of a way to get this simple system to work for a cube, or will it have to be more complicated? I can't think of a way to make it work without horrible edge cases resulting in a very inflexible system. Perhaps I will just have to ignore neighbours across the boundaries and hence have small gaps in the terrain there. Thanks for the help!
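     For reference, this is roughly what the same-level lookup described above looks like within a single planar tree (a minimal sketch - the class and member names are made up, and children are assumed to be indexed 0 = NW, 1 = NE, 2 = SW, 3 = SE):

     public class QuadtreeNode
     {
         public QuadtreeNode Parent;
         public QuadtreeNode[] Children;   // null if this node is a leaf
         public int ChildIndex;            // which child of Parent this node is

         // Same-depth neighbour to the west (left), or null at the tree border
         // or where the neighbouring branch hasn't been split this far down.
         public QuadtreeNode GetWestNeighbour()
         {
             if (Parent == null)
             {
                 return null;
             }

             // East-side children: the west neighbour is simply a sibling.
             if (ChildIndex == 1) return Parent.Children[0];   // NE -> NW
             if (ChildIndex == 3) return Parent.Children[2];   // SE -> SW

             // West-side children: go via the parent's own west neighbour.
             QuadtreeNode parentWest = Parent.GetWestNeighbour();
             if (parentWest == null || parentWest.Children == null)
             {
                 return null;
             }

             // NW maps to the neighbour's NE child, SW to its SE child.
             return ChildIndex == 0 ? parentWest.Children[1] : parentWest.Children[3];
         }
     }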
  5. Thanks for the reply - yes, I thought about what would happen if I used the same arbitrary vectors for every position, which is why I choose between the X and Y directions depending on the Y component of the position. I was thinking this would ensure that the tangents always form an orthonormal basis around the point on the sphere. It will result in different tangents for different positions, but I was thinking this wouldn't matter as long as I was taking samples across two perpendicular directions in the tangent plane.
  6. A quick addition - I tried creating my own adaptation:

     public Vector3 GetNormalFromFiniteOffset(Vector3 location, float sampleOffset)
     {
         Vector3 normalisedLocation = Vector3.Normalize(location);

         // Pick an arbitrary axis that isn't nearly parallel to the radial direction
         Vector3 arbitraryUnitVector = Math.Abs(normalisedLocation.Y) > 0.999f ? Vector3.UnitX : Vector3.UnitY;

         // Build an orthonormal basis in the tangent plane at this point
         Vector3 tangentVector1 = Vector3.Cross(arbitraryUnitVector, normalisedLocation);
         tangentVector1.Normalize();
         Vector3 tangentVector2 = Vector3.Cross(tangentVector1, normalisedLocation);
         tangentVector2.Normalize();

         // Central differences along the two tangent directions
         float hL = GetHeight(location - tangentVector1*sampleOffset);
         float hR = GetHeight(location + tangentVector1*sampleOffset);
         float hD = GetHeight(location - tangentVector2*sampleOffset);
         float hU = GetHeight(location + tangentVector2*sampleOffset);

         Vector3 normal = 2*normalisedLocation + (hL - hR)*tangentVector1 + (hD - hU)*tangentVector2;
         normal.Normalize();

         return normal;
     }

     I can't test it yet, but I wonder if anyone thinks this looks like a decent approach, or are there obvious issues with how I'm doing this? Thanks again!
  7. Hi everyone, I'm currently adapting a planar quadtree terrain system to handle spherical terrain (i.e. I want to render planets as opposed to just an endless terrain stretching in the X-Z plane). At the moment, I use this algorithm to calculate the normal for a given point on the terrain from the height information (the heights are generated using simplex noise):

     public Vector3 GetNormalFromFiniteOffset(float x, float z, float sampleOffset)
     {
         float hL = GetHeight(x - sampleOffset, z);
         float hR = GetHeight(x + sampleOffset, z);
         float hD = GetHeight(x, z - sampleOffset);
         float hU = GetHeight(x, z + sampleOffset);

         Vector3 normal = new Vector3(hL - hR, 2, hD - hU);
         normal.Normalize();

         return normal;
     }

     The above works fine for my planar quadtree, but of course it won't work for spherical terrain because it assumes the Y direction is always up. I guess I need to move the calculation into the plane which lies tangent to the sphere at the point I'm evaluating. I was wondering if anyone knows of a good or proven way to transform this calculation for any point on a sphere, or if there's a better way to calculate the normal for a point on a sphere which has been radially displaced using a noise-based height field? I'm using XNA by the way. Thanks very much for looking!
  8. george7378

    Adapting planar quadtree for spheres

    Thanks for the reply - makes sense! For the second point, I guess that means a given LOD in the quadtree will have a slightly differing vertex density across the sphere due to the deformation when the tiles are projected onto the sphere? I suppose it's a minor problem though.
  9. Adapting planar quadtree for sphere

     Hi everyone, I'm quite used to working with quadtrees for flat, 'infinite' terrain where I just have a coherent noise function that returns a height for a given x, z position, and quadtree cells are split based on camera distance to their centre (where the centre takes into account the height as well). I'd like to expand this to work for spherical planets using the common method of having 6 quadtrees which form a cube, then normalising the vertex positions to make a sphere. I have a few questions though which none of the tutorials for this seem to answer:

     - What coordinates should I use to sample my noise function? The raw (x, y, z) coordinates of the vertex positions in the cube, or the same coordinates after I've 'spherified' them?
     - What distance should I use to decide if quadtree nodes should split/merge? Should I use the camera's distance to the node in its cube position, or should I work out where the node's centre is on the sphere and then use the distance to that?

     Thanks for the help!
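     One way to read the usual advice (a sketch only - CubeSphereHelper, GetSurfacePosition and the noise delegate are hypothetical names, and plain normalisation works too if the uneven vertex spread doesn't bother you): sample the noise with the spherified position so the feature size stays roughly even over the sphere, and use the spherified (ideally height-displaced) node centre for the split/merge distance, since that is where the rendered geometry actually sits.

     using System;
     using Microsoft.Xna.Framework;

     public static class CubeSphereHelper
     {
         // Map a point on the surface of the unit cube onto the unit sphere.
         // This is the often-quoted mapping that spreads vertices a bit more
         // evenly towards the cube corners than plain normalisation does.
         public static Vector3 CubeToSphere(Vector3 p)
         {
             float x2 = p.X*p.X, y2 = p.Y*p.Y, z2 = p.Z*p.Z;
             return new Vector3(
                 p.X*(float)Math.Sqrt(1.0f - y2/2.0f - z2/2.0f + y2*z2/3.0f),
                 p.Y*(float)Math.Sqrt(1.0f - z2/2.0f - x2/2.0f + z2*x2/3.0f),
                 p.Z*(float)Math.Sqrt(1.0f - x2/2.0f - y2/2.0f + x2*y2/3.0f));
         }

         // Sample the noise at the spherified position, then displace radially.
         public static Vector3 GetSurfacePosition(Vector3 cubePosition, float planetRadius, Func<Vector3, float> getNoise)
         {
             Vector3 sphereDirection = CubeToSphere(cubePosition);   // unit length for points on the cube surface
             float height = getNoise(sphereDirection*planetRadius);
             return sphereDirection*(planetRadius + height);
         }
     }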
  10. Hi everyone, I have a water (pixel) shader which works like this:

     - Sample a normal map twice using two sets of moving coordinates. Average these to get the normal.
     - Sample reflection and refraction maps using screen-space coordinates which are perturbed slightly according to the normal.
     - Add a 'murky' colour to the refraction colour based on depth.
     - Interpolate between reflection and refraction maps based on the Fresnel term (i.e. how shallow the viewing angle is).
     - Add specular highlights using the sampled normal.

     http://imgur.com/ALYmL53.jpg
     http://imgur.com/2cGc8kg.jpg

     I'm liking the results but I feel like there should be a diffuse lighting element too, so that waves created by the normal map can be seen when you aren't looking in the direction of the Sun. This is simple enough on a solid object (I'm already doing it on the terrain in the above pictures) but I'm not sure of the most accurate way to do it for water. Should I apply a diffuse factor to both reflections and refractions? Should I do it the same way as I would for solid objects? Anyone with experience creating their own water shader, some input would be very helpful :) Thanks.
  11. george7378

    What are you working on?

    I recently picked up a failed attempt from last year to create an 'infinite' procedural terrain. Really pleased with how it's going this time round:
  12. george7378

    Depth in water shader

     I think the issue is that the depth doesn't have enough resolution when stored in the alpha channel. The effect seems to be working when the camera is right next to the water plane, but since my far clip plane is pretty distant, it causes problems when the camera is not close to the water. Not sure of the best way to continue with this - perhaps I should calculate the 'fog factor' in the terrain shader based on actual world positions, but then I'd have to add a ray-plane intersection to calculate the distance to the water.

     EDIT: I was already doing clipping when drawing the refraction map, so I just decided to use the distance below the clip plane and pass that through in the alpha channel (after dividing by a scale factor to get it into a reasonable range). Produces a decent result:
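     If the 8-bit alpha channel is still the limiting factor, a different option (not what the post above settled on, just an alternative worth noting) is to write the refraction pass's depth or water-plane distance into a separate single-channel floating-point render target and sample that from the water shader instead. A minimal XNA 4.0-style sketch - SurfaceFormat.Single needs the HiDef profile, if I remember right, and the helper name is made up:

     using Microsoft.Xna.Framework.Graphics;

     public static class DepthTargetHelper
     {
         // A 32-bit single-channel float target avoids the 8-bit alpha precision
         // limit when storing depth for the refraction pass (written either in a
         // second pass or via multiple render targets).
         public static RenderTarget2D CreateDepthTarget(GraphicsDevice device, int width, int height)
         {
             return new RenderTarget2D(device, width, height, false,
                 SurfaceFormat.Single, DepthFormat.Depth24);
         }
     }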
  13. george7378

    Depth in water shader

    Thanks for the reply. I think for XNA the projection gives a z-coordinate from 0 to 1, with 0 being the near plane (?): https://msdn.microsoft.com/en-us/library/bb195672.aspx
  14. Hi everyone, I have a planar water shader which I've been using for quite a while. It follows the standard format of interpolating between refraction/reflection maps to create the overall colour and using normal maps to perturb the texture coordinates and create specular reflection. It produces decent results but I'd like to add some depth-based effects such as weighting towards a 'murky' colour for pixels in the refraction map which have a larger volume of water between them and the camera. Something a bit like this: https://i.ytimg.com/vi/UkskiSza4p0/maxresdefault.jpg

     I tried the following approach to determine the water depth for each pixel of the refraction map:

     - As I'm rendering terrain under the water, the alpha channel is not going to be used (i.e. I won't have translucent terrain). So, in the terrain pixel shader, I used the alpha channel to return the screen-space depth, i.e:

     // ...pixel shader code to calculate the terrain colour here...
     return float4(finalTerrainColour.rgb, psInput.ScreenPos.z/psInput.ScreenPos.w);

     - Then in the water shader, when I sample my refraction map containing the terrain rendered below the water, I find the difference between the alpha channel (where the terrain depth is stored) and the depth of the current pixel on the water plane, i.e:

     // ...pixel shader code to sample refraction map colour...
     refractionMapColour = lerp(refractionMapColour, murkyWaterColour, saturate(depthDropoffScale*(refractionMapColour.a - psInput.ScreenPos.z/psInput.ScreenPos.w)));

     This didn't seem to work and produced some strange results, for example when the camera was just above the water plane, it would show the 'murky' colour only, even though the terrain was quite a way below the water (and hence refractionMapColour.a should have been much larger than the depth of the pixel on the water plane). So, can anyone spot an obvious problem with the way I tried above, and are there any better ways of going about such depth-based effects? Thanks for the help :)
  15. Hi everyone, I'm pretty happy with my system for making an 'infinite' terrain using quadtree level of detail. At the moment though, the terrain is bare with no vegetation. I'd like to place trees, etc... on my landscape in areas where the slope is sufficiently small, but I'm having trouble coming up with the best way to do it. Here's my best idea right now: when a new quadtree node is created, if the slope at the central point is below a certain value, the node is eligible for vegetation. Choose a set number of random locations within the node and add trees. This means that a maximum number of trees exists per node and that larger, less detailed nodes will have a lower density of trees.

     However, there are problems - for example, in large nodes the centre point may not represent the terrain that the node encompasses. Also, it would be nice to have a system that places the trees in the same 'random' locations every time a node in a particular location with a particular level of detail is created. Perhaps you can share your ideas for how to automatically place trees? Thanks in advance :)
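     For the 'same random locations every time' part, one approach (a sketch only - all names here are hypothetical, and getSlope stands in for whatever slope query the terrain already provides) is to derive the random seed from the node's grid position and LOD level, then test the slope at each candidate position rather than only at the node centre. Note that with the level in the seed, the layout changes when a node splits; seeding from fixed-size world cells instead would keep trees in place across LOD changes.

     using System;
     using System.Collections.Generic;
     using Microsoft.Xna.Framework;

     public static class VegetationPlacer
     {
         // Deterministic per-node seed: the same node (grid position + LOD level)
         // always produces the same tree layout.
         public static int GetNodeSeed(int nodeX, int nodeZ, int lodLevel)
         {
             unchecked
             {
                 int hash = 17;
                 hash = hash*31 + nodeX;
                 hash = hash*31 + nodeZ;
                 hash = hash*31 + lodLevel;
                 return hash;
             }
         }

         // Scatter up to maxTrees positions inside the node, keeping only those whose
         // local slope is shallow enough. getSlope is assumed to return the terrain
         // slope (e.g. 1 - normal.Y) at a world-space (x, z) position.
         public static List<Vector2> PlaceTrees(
             float nodeMinX, float nodeMinZ, float edgeLength,
             int nodeX, int nodeZ, int lodLevel,
             int maxTrees, float maxSlope,
             Func<float, float, float> getSlope)
         {
             var random = new Random(GetNodeSeed(nodeX, nodeZ, lodLevel));
             var positions = new List<Vector2>();

             for (int i = 0; i < maxTrees; i++)
             {
                 float x = nodeMinX + (float)random.NextDouble()*edgeLength;
                 float z = nodeMinZ + (float)random.NextDouble()*edgeLength;

                 // Test the slope at the candidate position itself, not just the node centre.
                 if (getSlope(x, z) <= maxSlope)
                 {
                     positions.Add(new Vector2(x, z));
                 }
             }

             return positions;
         }
     }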